All Updates

Product updates | Meta introduces Chameleon, an early-fusion multimodal AI model | Foundation Models | May 22, 2024

This week:

Robinhood launches joint investment accounts | Retail Trading Infrastructure | Yesterday
Partnerships | eToro partners with London Stock Exchange to expand UK stock offerings | Retail Trading Infrastructure | Yesterday
Funding | StorMagic secures funding from Palatine Growth Credit Fund | Edge Computing | Yesterday
Funding | Archera raises USD 17 million in Series B funding for product development and recruitment | Cloud Optimization Tools | Yesterday
Funding | Alto Neuroscience receives grant of USD 11.7 million to support Phase IIb clinical trials of ALTO-100 | Precision Medicine | Yesterday
Partnerships | Quest Diagnostics and BD partner to develop flow cytometry-based companion diagnostics for cancer and other diseases | Precision Medicine | Yesterday
Product updates | USPACE Technology Group Limited unveils commercial optical satellites and related aerospace products | Next-gen Satellites | Yesterday
Industry news | Sweden issues study on Gripen fighter jet’s satellite launch capability | Next-gen Satellites | Yesterday
Product updates, Regulation/policy | Terran Orbital receives certification for new manufacturing facility to begin production | Next-gen Satellites | Yesterday
Partnerships | Crisalion Mobility partners with Air Chateau for pre-order of eVTOL aircraft | Passenger eVTOL Aircraft | Yesterday
Foundation Models

May 22, 2024

Meta introduces Chameleon, an early-fusion multimodal AI model

Product updates

  • Meta has introduced Chameleon, an early-fusion multimodal AI model. The model is currently in preview and has not been officially released.

  • Meta reports that the Chameleon models are proficient at image captioning and visual question answering while remaining competitive on text-only tasks.

  • The model uses an "early-fusion token-based mixed-modal" architecture, built to learn from a combined mixture of images, text, and code, among other data. It encodes and decodes all modalities in a unified token space, allowing it to generate and reason over sequences that mix text and images without requiring modality-specific components (a minimal sketch of this idea follows below).
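To make the unified-token-space idea concrete, here is a minimal PyTorch sketch of early-fusion modeling. It assumes images are first quantized into discrete codebook ids (as a VQ-style tokenizer would produce) and that those ids are simply offset into a shared vocabulary alongside text tokens. The class name EarlyFusionLM, the vocabulary sizes, and the model dimensions are illustrative assumptions, not Meta's actual implementation.

import torch
import torch.nn as nn

# Illustrative sizes only; Meta's actual vocabularies and dimensions differ.
TEXT_VOCAB = 32_000    # assumed size of the text vocabulary
IMAGE_VOCAB = 8_192    # assumed size of a VQ-style image codebook
UNIFIED_VOCAB = TEXT_VOCAB + IMAGE_VOCAB  # one shared token space

class EarlyFusionLM(nn.Module):
    """A single transformer over text AND image tokens (hypothetical sketch)."""
    def __init__(self, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        # One embedding table serves both modalities: no separate image branch.
        self.embed = nn.Embedding(UNIFIED_VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        # One output head: the next token may be text or image, interchangeably.
        self.lm_head = nn.Linear(d_model, UNIFIED_VOCAB)

    def forward(self, tokens):
        # Causal mask so each position only attends to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.blocks(self.embed(tokens), mask=causal)
        return self.lm_head(hidden)  # next-token logits over the unified vocabulary

def image_codes_to_tokens(codebook_ids):
    """Shift quantized image codes into the shared vocabulary range."""
    return codebook_ids + TEXT_VOCAB

# Interleave caption tokens and image tokens in a single sequence; the model
# reasons over (and could generate) both without modality-specific components.
caption = torch.randint(0, TEXT_VOCAB, (1, 16))                # stand-in text
image = image_codes_to_tokens(torch.randint(0, IMAGE_VOCAB, (1, 64)))
sequence = torch.cat([caption, image], dim=1)                  # early fusion
logits = EarlyFusionLM()(sequence)
print(logits.shape)  # torch.Size([1, 80, 40192])

The point of the sketch is the design choice the bullet describes: because both modalities share one vocabulary, image captioning, visual question answering, and mixed text-image generation all reduce to ordinary next-token prediction over a single sequence.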
