AI21, a developer of foundation models, has introduced Jamba 1.5 Mini and Jamba 1.5 Large, two models built for high performance and efficiency in long-context language processing.
The models use a hybrid architecture that combines Transformer and Mamba layers, which lets them maintain response quality over very large context windows. Jamba 1.5 Large is a mixture-of-experts model with 398 billion total parameters and 94 billion active parameters, while Jamba 1.5 Mini is an enhanced version of Jamba-Instruct. Both models support a context window of 256K tokens.
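As a rough illustration of how such a model might be used, the sketch below loads Jamba 1.5 Mini through the Hugging Face Transformers library and runs a single generation. The repository id, the use of Transformers, and the generation settings are assumptions made for this example, not details taken from the announcement.

```python
# Minimal sketch: loading Jamba 1.5 Mini via Hugging Face Transformers.
# The repo id and settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs
    torch_dtype="auto",  # use the checkpoint's native precision
)

# In a long-context scenario the prompt would typically include a large
# document plus a question; a short placeholder is used here.
prompt = "Summarize the key points of the attached report."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```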
The company claims that the models outperform competitors in end-to-end latency tests. Both are optimized for retrieval-augmented generation (RAG) and agentic workflows, making them well suited to complex, data-heavy tasks in enterprise environments.