NVIDIA has launched its latest GPU, the Blackwell B200, offering up to 25x lower cost and energy consumption than its predecessors, the H100 and A100 GPUs. The Blackwell GPU packs 208 billion transistors (2.5x more than the H100) and is manufactured using a custom-built TSMC 4NP process, enabling AI training and real-time LLM inference for models with up to 10 trillion parameters.
The GPU incorporates a dedicated engine that provides AI-based preventive maintenance, running diagnostics and forecasting reliability issues at the chip level. It also features advanced confidential computing capabilities to safeguard AI models and customer data without compromising performance. In addition, a decompression engine supports the latest formats and accelerates database queries, ensuring high performance in data analytics and data science workflows.
Furthermore, NVIDIA has developed the NVIDIA GB200 Grace Blackwell Superchip, which links two B200 GPUs to an NVIDIA Grace CPU.
NVIDIA has also introduced a complete rack-scale system, the GB200 NVL72, which integrates 36 GB200 Superchips for a total of 72 Blackwell GPUs. The system delivers 720 petaflops of training performance and 1.4 exaflops of inference performance and can accommodate models with up to 27 trillion parameters. A back-of-envelope breakdown of these figures follows below.
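As a rough sanity check on the rack-level figures quoted above, the sketch below divides the aggregate throughput evenly across the 72 GPUs. The per-GPU numbers it prints are derived from the rack totals, not official per-GPU specifications, and the floating-point precision behind each figure is not stated here, so treat them as illustrative ratios only.

    # Back-of-envelope check of the GB200 NVL72 figures quoted above.
    # Assumption: the rack totals (720 PFLOPS training, 1.4 EFLOPS inference)
    # are evenly attributable to the 72 Blackwell GPUs.

    superchips = 36
    gpus_per_superchip = 2        # each GB200 Superchip pairs two B200 GPUs with one Grace CPU
    total_gpus = superchips * gpus_per_superchip   # 72

    training_pflops = 720         # rack-level training performance (petaflops)
    inference_eflops = 1.4        # rack-level inference performance (exaflops)

    print(f"Total GPUs: {total_gpus}")
    print(f"Implied training throughput per GPU: {training_pflops / total_gpus:.1f} PFLOPS")
    print(f"Implied inference throughput per GPU: {inference_eflops * 1000 / total_gpus:.1f} PFLOPS")

Running this prints 72 GPUs, roughly 10 PFLOPS of training throughput and roughly 19 PFLOPS of inference throughput per GPU, consistent with the rack totals.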