French AI startup Mistral AI has partnered with IBM to make its open-source Mixtral-8x7B large language model available on IBM’s AI and data platform watsonx.
The partnership offers an optimized version of Mixtral-8x7B that IBM claims reduces processing time by 35-75%. IBM applies quantization, which shrinks model size and memory requirements and, in turn, lowers cost and energy consumption.
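To illustrate why quantization cuts memory requirements, here is a minimal sketch of symmetric int8 post-training quantization of a single weight matrix. This is a generic illustration, not IBM's actual watsonx optimization pipeline, and the matrix dimensions are hypothetical.

```python
import numpy as np

# Hypothetical fp32 weight matrix (dimensions chosen for illustration only)
weights_fp32 = np.random.randn(4096, 4096).astype(np.float32)

# Symmetric per-tensor quantization: map the fp32 range onto int8 [-127, 127]
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Dequantize at inference time; values are approximate (within ~half a scale step)
weights_dequant = weights_int8.astype(np.float32) * scale

print(f"fp32 size: {weights_fp32.nbytes / 1e6:.1f} MB")  # ~67 MB
print(f"int8 size: {weights_int8.nbytes / 1e6:.1f} MB")  # ~17 MB, a 4x reduction
print(f"max abs error: {np.abs(weights_fp32 - weights_dequant).max():.4f}")
```

Storing 8-bit integers instead of 32-bit floats reduces the memory footprint roughly fourfold per quantized tensor, at the cost of a small, bounded approximation error, which is the basic trade-off behind quantizing large models such as Mixtral-8x7B.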
IBM also made Meta's open-source Llama-2-13B-chat and Llama-2-70B-chat models, ELYZA's ELYZA-japanese-Llama-2-7b, and other third-party models available on watsonx this week.
Analyst Quicktake: This February, Mistral AI also entered into partnerships with Microsoft and Amazon to make its models available on the Microsoft Azure and Amazon Bedrock cloud platforms, improving the accessibility of its models.