The Technology Innovation Institute (TII) has launched Falcon 2 11B, a large language model, and Falcon 2 11B VLM, a vision-to-language model, both with multilingual capabilities.
Both models have 11 billion parameters and were trained on 5.5 trillion tokens. The VLM variant converts visual inputs into textual outputs, a capability that can be applied in document management, digital archiving, and context indexing, and can support individuals with visual impairments.
TII also claims that both models run efficiently on a single graphics processing unit (GPU), making them easy to deploy and integrate into lighter infrastructure such as laptops and other devices.
The models can handle tasks in English, French, Spanish, German, Portuguese, and various other languages.
TII claims that Falcon 2 11B outperforms Meta's Llama 3 and performs on par with Google's Gemma 7B, as verified by Hugging Face.