AI startup Reka has launched Yasa-1, a multimodal AI assistant that understands text, images, short videos, and audio snippets. It is available in private preview and can be customized on private datasets.
Yasa-1 is built on a single unified model, Yasa, that delivers multimodal understanding across text, images, audio, and video. It supports 20 languages, handles long-context documents, executes code, and can ground its answers in information retrieved from the internet.
These capabilities — multimodal comprehension, broad language support, long-context processing, and code execution — are positioned to give enterprises a more comprehensive and customizable AI assistant than text-only alternatives.
Analyst QuickTake: The chatbot is a direct competitor to OpenAI's ChatGPT, which also recently received its own multimodal enhancement, enabling it to handle visual and audio prompts.