NVIDIA has unveiled the HGX H200, an AI chip positioned as an upgrade to the H100. The H200 claims 1.4x the memory bandwidth and 1.8x the memory capacity of its predecessor, improving its suitability for memory-intensive generative AI workloads. The first H200 chips are expected to ship in Q2 2024.
The H200 is a GPU designed for AI work, with its key improvements in memory: it adopts the new HBM3e memory specification for faster performance, raising memory bandwidth to 4.8 TB per second and total memory capacity to 141 GB.
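The headline multipliers follow directly from the stated specs. A quick sketch of the arithmetic, assuming H100 SXM baseline figures of 80 GB and 3.35 TB/s (the baseline numbers are an assumption, not stated in the article):

```python
# H200 figures from the article; H100 SXM baseline is an assumed reference point.
h100_capacity_gb, h100_bandwidth_tbs = 80, 3.35   # assumed H100 SXM specs
h200_capacity_gb, h200_bandwidth_tbs = 141, 4.8   # H200 specs per the article

bandwidth_gain = h200_bandwidth_tbs / h100_bandwidth_tbs  # ~1.43x, marketed as 1.4x
capacity_gain = h200_capacity_gb / h100_capacity_gb       # ~1.76x, marketed as 1.8x

print(f"bandwidth: {bandwidth_gain:.2f}x, capacity: {capacity_gain:.2f}x")
```

Under these assumed baselines, the ratios round to the 1.4x and 1.8x figures NVIDIA cites.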
The H200 remains compatible with systems that support the H100, and major cloud providers, including Amazon, Google, Microsoft, and Oracle, plan to offer the new GPUs.