Meta unveiled the next generation of its AI infrastructure program, Meta Training and Inference Accelerator (MTIA). The new AI chip, custom-made to meet the company's specific AI needs, will power existing and future products.
MTIA's features include greater compute power and memory bandwidth and an efficient architecture designed to support recommendation systems, GenAI applications, and advanced AI research. The company states that several programs are underway to expand MTIA's scope, including broader support for GenAI workloads.
The chip also offers improved performance and efficiency, doubling compute and memory bandwidth compared with its predecessor. This enhancement enables it to power the content-personalization capabilities across Meta's platforms.
Hardware enhancements include a rack-based design capable of housing up to 72 accelerators, a higher clock speed, and an upgraded fabric connecting the accelerators to the host. Additionally, the chip delivers performance gains through integration with PyTorch 2.0 and its related software features.