All Updates
Product updates: Meta introduces Chameleon, an early-fusion multimodal AI model (Foundation Models, May 22, 2024)
This week:
  • Funding: GrayMatter Robotics raises USD 45 million in Series B funding to accelerate AI-powered robotics solutions (Smart Factory, Yesterday)
  • Funding: Vecna Robotics raises USD 100 million in Series C funding; appoints new COO (Logistics Tech and Smart Factory, Yesterday)
  • Funding: FairNow raises USD 3.5 million to advance AI governance solutions (Generative AI Infrastructure, Yesterday)
  • Partnerships: Gravitics develops testing gauntlet for larger spacecraft in collaboration with NASA (Space Travel and Exploration Tech, Yesterday)
  • M&A: knownwell acquires Alfie Health to integrate AI in primary and obesity care services (Telehealth, Yesterday)
  • Funding: Pomelo Care raises USD 46 million in Series B funding to expand virtual maternal care (Telehealth, Yesterday)
  • Funding: Isar Aerospace raises EUR 65 million, backed by NATO Innovation Fund (Space Travel and Exploration Tech, Yesterday)
  • Product updates: Beyond Meat releases new Beyond Sausage, expanding its Beyond IV product line (Plant-based Meat, Yesterday)
  • Funding, Product updates: SurrealDB raises USD 20 million in Series A; launches beta version of Surreal Cloud (Data Infrastructure & Analytics, Yesterday)
Foundation Models

May 22, 2024

Meta introduces Chameleon, an early-fusion multimodal AI model

Product updates
  • Meta has introduced Chameleon, an early-fusion multimodal AI model. The model is currently in preview and has not been officially released.

  • Meta claims the Chameleon models are proficient in image captioning and visual question answering while remaining competitive on text-only tasks.

  • The model uses an "early-fusion token-based mixed-modal" architecture, built to learn from an interleaved mixture of images, text, code, and other modalities. It encodes and decodes all modalities in a unified token space, allowing it to generate and reason over sequences that mix text and images without requiring modality-specific components (see the sketch below).
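To make the "early fusion" idea concrete, here is a minimal sketch of a single autoregressive transformer operating over one unified token space, assuming a BPE text tokenizer and a VQ-style image tokenizer that quantizes images into discrete codes. All class names, vocabulary sizes, and dimensions below are illustrative assumptions, not Meta's released code or API.

```python
# Minimal early-fusion sketch (PyTorch): text tokens and quantized image
# codes share one vocabulary and one transformer. Illustrative only.
import torch
import torch.nn as nn

TEXT_VOCAB = 32_000               # assumed BPE vocabulary size
IMAGE_VOCAB = 8_192               # assumed image-tokenizer codebook size
VOCAB = TEXT_VOCAB + IMAGE_VOCAB  # unified token space for both modalities

class EarlyFusionLM(nn.Module):
    """One causal transformer over interleaved text and image tokens."""
    def __init__(self, d_model=512, n_heads=8, n_layers=4, max_len=1024):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)   # shared embedding table
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)     # may emit text OR image tokens

    def forward(self, ids):
        pos = torch.arange(ids.size(1), device=ids.device)
        h = self.tok(ids) + self.pos(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        h = self.blocks(h, mask=causal.to(ids.device))
        return self.head(h)                       # logits over the unified vocab

def to_unified(text_ids, image_codes):
    """Interleave modalities: image codes are offset past the text vocab."""
    return torch.cat([text_ids, image_codes + TEXT_VOCAB], dim=1)

# A caption followed by quantized image patches: one sequence, one model,
# no modality-specific encoder or decoder components.
text = torch.randint(0, TEXT_VOCAB, (1, 16))     # stand-in BPE token ids
image = torch.randint(0, IMAGE_VOCAB, (1, 64))   # stand-in VQ codes
logits = EarlyFusionLM()(to_unified(text, image))
print(logits.shape)                              # torch.Size([1, 80, 40192])
```

Because every modality lives in the same discrete vocabulary, generation is ordinary next-token sampling; an id above TEXT_VOCAB simply means the model is producing part of an image rather than text.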
