AMD has introduced two new products targeting the market for large language models (LLMs): the Instinct MI300X accelerator and the Instinct MI300A accelerated processing unit (APU).
AMD claims the MI300X is the world's highest-performing accelerator, with 1.5x more memory capacity than its predecessor. The MI300A, an APU for data centers, offers higher-performance computing, faster model training, and a 30x improvement in energy efficiency.
AMD claims that the MI300X matches NVIDIA's H100 in training LLMs but outperforms it on inference, delivering 1.4x better performance when running Meta's Llama 2.
Additionally, AMD released the Ryzen 8040 series, which integrates neural processing units (NPUs) to deliver 1.6x higher AI processing performance, along with faster video editing and gaming.
Analyst QuickTake: This launch builds on AMD's efforts, including its long-term partnership with Microsoft, to challenge NVIDIA's dominance in the AI chip space. In November 2023, AMD partnered with Microsoft to deploy the MI300X in Azure virtual machines (VMs) to provide customers with greater AI training choices.