
Nvidia Unveils Groundbreaking DGX Superpod With GB200 Grace Blackwell


Nvidia has introduced its latest AI supercomputer, the Nvidia DGX SuperPOD, built around the new Nvidia GB200 Grace Blackwell Superchip. The system is designed to process trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

The DGX SuperPOD uses a liquid-cooled, rack-scale architecture that delivers 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory, expandable with additional racks. At its core is the Nvidia GB200 Superchip, a new accelerator built to meet the demands of trillion-parameter generative AI training and inference.

Each DGX GB200 system integrates 36 Arm-based Nvidia Grace CPUs and 72 Nvidia Blackwell GPUs; each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs. Interconnected via fifth-generation Nvidia NVLink, the Superchips in a DGX GB200 system operate as a single unified supercomputer, with high-speed data transfer between CPUs and GPUs.
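The per-system tallies follow from the Superchip's composition. A minimal back-of-the-envelope sketch, assuming each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs and that a base SuperPOD configuration links eight DGX GB200 systems (the eight-system figure is an assumption from Nvidia's launch materials, not stated in this article):

```python
# Back-of-the-envelope tally for the DGX GB200 topology described above.
# Assumptions: 1 Grace CPU + 2 Blackwell GPUs per GB200 Superchip, and
# 8 DGX GB200 systems in a base SuperPOD (the 8-system count is an
# assumption, not a figure from this article).

CPUS_PER_SUPERCHIP = 1
GPUS_PER_SUPERCHIP = 2
SUPERCHIPS_PER_SYSTEM = 36   # stated in the article
SYSTEMS_PER_SUPERPOD = 8     # assumption

cpus_per_system = SUPERCHIPS_PER_SYSTEM * CPUS_PER_SUPERCHIP  # 36 Grace CPUs
gpus_per_system = SUPERCHIPS_PER_SYSTEM * GPUS_PER_SUPERCHIP  # 72 Blackwell GPUs

superpod_gpus = SYSTEMS_PER_SUPERPOD * gpus_per_system        # 576 GPUs
superpod_fp4_exaflops = 11.5                                  # stated in the article

# Implied per-GPU FP4 throughput, in petaflops (under the 8-system assumption):
per_gpu_pflops = superpod_fp4_exaflops * 1000 / superpod_gpus

print(cpus_per_system, gpus_per_system, superpod_gpus, round(per_gpu_pflops, 1))
```

Under these assumptions the arithmetic works out to roughly 20 petaflops of FP4 per GPU, consistent with Nvidia's published Blackwell figures.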

Notably, Nvidia says the GB200 Superchip delivers up to 30 times the performance of its current flagship H100 Tensor Core GPU on large language model inference, a significant advance in AI supercomputing.

Each DGX GB200 system comprises 36 Nvidia GB200 Superchips, and the SuperPOD's capacity grows by adding more racks of these systems.

The SuperPOD can scale to tens of thousands of GB200 Superchips connected via Nvidia Quantum InfiniBand, providing a vast shared-memory space for next-generation AI models. The architecture includes Nvidia BlueField-3 DPUs and supports Nvidia Quantum-X800 InfiniBand networking, enhancing in-network computing performance.

The DGX SuperPOD's liquid-cooled architecture keeps the hardware within thermal limits under heavy computational loads, supporting sustained performance and energy-efficient operation.

Expected to be available later this year through Nvidia's global partners, the DGX SuperPOD with DGX GB200 and DGX B200 systems is set to revolutionize AI supercomputing, offering unparalleled computational power and efficiency for complex AI workloads.

Collaborations with Oracle, Google, Microsoft, and AWS further extend the reach of the new platform, showcasing its potential to drive AI innovation across industries and solidifying Nvidia's position as a leader in high-performance computing for AI.
