All Updates

Product updates | AI21 launches Jamba 1.5 Mini and Jamba 1.5 Large models for long-context language processing | Foundation Models | Aug 22, 2024

This week:

Product updates | Hexagon unveils Advanced Compensation for metal 3D printing | Additive Manufacturing | Yesterday
Funding | Eden AI raises EUR 3 million in seed funding to accelerate product development | Generative AI Infrastructure | Nov 21, 2024
M&A | Wiz acquires Dazz to expand cloud security remediation capabilities | Next-gen Cybersecurity | Nov 21, 2024
Partnerships | Immutable partners with Altura to enhance Web3 game development and marketplace solutions | Web3 Ecosystem | Nov 21, 2024
Funding | OneCell Diagnostics raises USD 16 million in Series A funding to enhance cancer diagnostics | Precision Medicine | Nov 21, 2024
Partnerships | BioLineRx and Ayrmid partner to license and commercialize APHEXDA across multiple indications | Precision Medicine | Nov 21, 2024
Product updates | SOPHiA GENETICS announces global launch of MSK-IMPACT powered with SOPHiA DDM | Precision Medicine | Nov 21, 2024
Product updates | Biofidelity launches Aspyre Clinical Test for lung cancer detection | Precision Medicine | Nov 21, 2024
Partnerships | Spendesk partners with Adyen to enhance SMB spend management with banking-as-a-service solution | Business Expense Management | Nov 21, 2024
M&A | Mews acquires Swedish RMS provider Atomize to enhance Hospitality Cloud platform | Travel Tech | Nov 21, 2024
Foundation Models

Aug 22, 2024

AI21 launches Jamba 1.5 Mini and Jamba 1.5 Large models for long-context language processing

Product updates

  • AI21, a developer of foundation models, has introduced Jamba 1.5 Mini and Jamba 1.5 Large to offer high performance and efficiency for long-context language processing.

  • The models use a hybrid architecture that combines Transformer layers with the Mamba structured state-space approach, enabling high-quality responses over large context windows. Jamba 1.5 Large is a mixture-of-experts model with 398 billion total parameters and 94 billion active parameters, while Jamba 1.5 Mini is an enhanced version of Jamba-Instruct. Both models have a context window of 256K tokens.

  • The company claims that the models outperform competitors in end-to-end latency tests. They are optimized for building retrieval-augmented generation (RAG) and agentic workflows, making them suitable for complex, data-heavy tasks in enterprise environments.
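To put the 256K-token context window in perspective, the sketch below estimates whether a long document would fit in a single prompt. Only the 256K figure comes from AI21's announcement; the 4-characters-per-token ratio and the output-token reservation are illustrative assumptions, not properties of AI21's actual tokenizer.

```python
# Back-of-the-envelope check of whether a document fits Jamba 1.5's
# stated 256K-token context window.
JAMBA_CONTEXT_TOKENS = 256_000  # context window from AI21's announcement
CHARS_PER_TOKEN = 4             # assumed rough average for English text

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Return True if the estimated token count of `text` still leaves
    room for `reserved_for_output` generated tokens within the window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= JAMBA_CONTEXT_TOKENS

# A ~500-page report at ~2,000 characters per page is ~1,000,000 chars,
# i.e. roughly 250,000 estimated tokens: it fits, with little headroom.
report = "x" * 1_000_000
print(fits_in_context(report))  # prints True
```

A document twice that size (roughly 500,000 estimated tokens) would not fit, which is the kind of case where the RAG workflows mentioned above, retrieving only the relevant passages, become necessary.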
