All updates

• Product updates: AI21 launches Jamba 1.5 Mini and Jamba 1.5 Large models for long-context language processing (Foundation Models, Aug 22, 2024)

This week:

• Orijin raises seed funding for product development and expansion (Conservation Tech / Smart Farming, Yesterday)
• Product updates: Perplexity adds OpenAI o1 model and develops homepage widgets (Foundation Models, Yesterday)
• Partnerships: Nikola partners with WattEV to supply 22 BEVs (Truck Industry Tech, Yesterday)
• M&A: Zebra Technologies to acquire Photoneo from Photoneo Brightpick Group for undisclosed sum (Logistics Tech, Yesterday)
• Funding: Scope Technologies increases private placement offering to CAD 1.8 million (Machine Learning Infrastructure, Yesterday)
• Funding: Firefly Neuroscience raises USD 12.4 million in growth funding to commercialize technology (AI Drug Discovery, Dec 31, 2024)
• Listing: Nasdaq affirms delisting of OpGen after failed appeal (Precision Medicine, Dec 31, 2024)
• Funding: Rumble raises USD 775 million in strategic investment to support growth (Creator Economy, Dec 31, 2024)
• Product updates: InstaDeep releases open-source genomics AI model Nucleotide Transformers (Foundation Models, Dec 31, 2024)

AI21 launches Jamba 1.5 Mini and Jamba 1.5 Large models for long-context language processing
Product updates | Foundation Models | Aug 22, 2024

  • AI21, a developer of foundation models, has introduced Jamba 1.5 Mini and Jamba 1.5 Large, two models designed for efficient, high-performance long-context language processing.

  • Both models use a hybrid architecture that combines Transformer and Mamba layers, which lets them maintain response quality across very large context windows; each supports a 256K-token context window. Jamba 1.5 Large is a mixture-of-experts model with 398 billion total parameters and 94 billion active parameters, while Jamba 1.5 Mini is an enhanced version of Jamba-Instruct.

  • The company claims that the models outperform competitors in end-to-end latency tests. They are optimized for building retrieval-augmented generation (RAG) and agentic workflows, making them suitable for complex, data-heavy tasks in enterprise environments (a usage sketch follows below).
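
The update itself includes no code. As a rough illustration of the RAG-style usage described in the last bullet, the sketch below loads the smaller model through the Hugging Face transformers library and answers a question over retrieved passages packed directly into the long context window. The repository id ai21labs/AI21-Jamba-1.5-Mini, the availability of open weights, and the chat template are assumptions made for illustration, not details confirmed by the announcement.

# Minimal sketch (Python), assuming Jamba 1.5 Mini is published on Hugging Face
# under the assumed repo id "ai21labs/AI21-Jamba-1.5-Mini" and that a recent
# transformers release with Jamba support is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# RAG-style prompting: retrieved passages are placed directly in the prompt,
# relying on the model's long (256K-token) context window.
retrieved_context = "\n\n".join([
    "Doc 1: ...",  # placeholder retrieved passages
    "Doc 2: ...",
])
messages = [
    {"role": "system", "content": "Answer using only the provided documents."},
    {"role": "user", "content": f"{retrieved_context}\n\nQuestion: What does Doc 1 say?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))

Because a 256K-token window can hold many retrieved passages, simple prompt-stuffing like this can stand in for a more elaborate retrieval pipeline in a first prototype; an agentic workflow would layer tool calls and planning on top of the same generate loop.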
