Reka has introduced Core, a frontier-class multimodal language model positioned to compete with offerings from OpenAI, Anthropic, and Google.
The model can understand images, video, and audio alongside text. It also has a 128K-token context window, can reason and generate code, and was trained on data in 32 languages.
The company claims the model supports diverse use cases, helping organizations simplify tasks and extract more value from their data while reducing costs.
Analyst QuickTake: This news comes two months after the company introduced Reka Flash, a multimodal and multilingual language model that it claimed outperformed OpenAI’s GPT-3.5. The model launched today is positioned as superior, aiming to compete with more advanced models such as OpenAI’s GPT-4 and Anthropic’s Claude 3.