French foundation model developer Mistral AI and NVIDIA have released Mistral NeMo, a new 12-billion-parameter multilingual small language model (SLM) designed for enterprise applications.
Mistral NeMo reportedly delivers strong performance across diverse tasks, including chatbots, multilingual applications, coding, and summarization. It was developed by combining Mistral AI's expertise in training data with NVIDIA's optimized hardware and software ecosystem.
Analyst QuickTake: With this, Mistral AI joins the ranks of other companies such as Microsoft (Phi-3 Mini), Google (Gemini Nano), Stability AI (Stable LM 2 1.6B), and Apple (OpenELM), which have previously launched small language models (SLMs). This also coincides with OpenAI's release of its SLM GPT-4o mini today. On a separate note, the company also launched two new models just a few days ago: Codestral Mamba for faster and longer code generation and Mathstral for math-related reasoning and scientific discovery.