Reka is a developer of multimodal models and an AI assistant, founded in 2022 by researchers from DeepMind, Google, and Meta. Reka's flagship application, “Yasa-1”, is a multimodal AI assistant based on its foundation model, Yasa, and is capable of understanding images, short videos, and audio snippets. In public beta (as of February 2024), Yasa-1 can be customized for specific use cases using private datasets and supports 20 languages. Users can combine text prompts with multimedia to obtain more specific answers, such as generating social media posts from product images or identifying sounds, drawing on context from the internet. Yasa-1 can also analyze videos, providing insights into the topics discussed and predicting future actions. However, according to the company, the technology shares the limitations of other LLMs, particularly difficulty with intricate details in multimedia content and a tendency to hallucinate.