Microsoft Azure today introduced the new NDv6 GB300 VM series, delivering the industry’s first supercomputing-scale production cluster of NVIDIA GB300 NVL72 systems, purpose-built for OpenAI’s most demanding AI inference workloads.
This supercomputer-scale cluster features over 4,600 NVIDIA Blackwell Ultra GPUs connected via the NVIDIA Quantum-X800 InfiniBand networking platform. Microsoft’s unique systems approach applied radical engineering to memory and networking to provide the massive scale of compute required to achieve high inference and training throughput for reasoning models and agentic AI systems.
Today’s achievement is the result of years of deep partnership between NVIDIA and Microsoft, purpose-building AI infrastructure for the world’s most demanding AI workloads and delivering infrastructure for the next frontier of AI. It marks another leadership moment, ensuring that cutting-edge AI drives innovation in the United States.
“Delivering the industry’s first at-scale NVIDIA GB300 NVL72 production cluster for frontier AI is an achievement that goes beyond powerful silicon; it reflects Microsoft Azure and NVIDIA’s shared commitment to optimizing every part of the modern AI data center,” said Nidhi Chappell, corporate vice president of Microsoft Azure AI Infrastructure.
“Our collaboration helps ensure customers like OpenAI can deploy next-generation infrastructure at unprecedented scale and speed.”
Inside the Engine: The NVIDIA GB300 NVL72
At the heart of Azure’s new NDv6 GB300 VM series is the liquid-cooled, rack-scale NVIDIA GB300 NVL72 system. Each rack is a powerhouse, integrating 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs into a single, cohesive unit to accelerate training and inference for massive AI models.
The system delivers a staggering 37 terabytes of fast memory and 1.44 exaflops of FP4 Tensor Core performance per VM, creating a massive, unified memory space essential for reasoning models, agentic AI systems and complex multimodal generative AI.
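To make those rack-level figures concrete, here is a quick back-of-the-envelope breakdown per GPU using only the numbers quoted above; the even split across 72 GPUs is a simplification for illustration, not an official per-GPU specification:

```python
# Back-of-the-envelope breakdown of the GB300 NVL72 figures quoted above.
# The even per-GPU split is illustrative, not an official specification.

GPUS_PER_RACK = 72
FAST_MEMORY_TB = 37      # unified fast memory per rack/VM
FP4_EXAFLOPS = 1.44      # FP4 Tensor Core performance per rack/VM

memory_per_gpu_gb = FAST_MEMORY_TB * 1000 / GPUS_PER_RACK
fp4_pflops_per_gpu = FP4_EXAFLOPS * 1000 / GPUS_PER_RACK

print(f"~{memory_per_gpu_gb:.0f} GB of fast memory per GPU")  # ~514 GB
print(f"~{fp4_pflops_per_gpu:.0f} PFLOPS of FP4 per GPU")     # ~20 PFLOPS
```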
NVIDIA Blackwell Ultra is supported by the full-stack NVIDIA AI platform, including collective communication libraries that tap into new numerical formats like NVFP4 for breakthrough training performance, as well as inference frameworks like NVIDIA Dynamo for the highest inference performance in reasoning AI.
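The core idea behind block-scaled 4-bit formats like NVFP4 is to snap values onto a tiny floating-point grid and recover dynamic range with a shared per-block scale. The sketch below illustrates that idea only; the 16-element block size and simple max-based scaling here are assumptions for illustration, not the NVFP4 specification:

```python
import numpy as np

# Illustrative sketch of block-scaled 4-bit quantization in the spirit of
# NVFP4: values are snapped to a 4-bit E2M1 grid and a scale is shared per
# block. Block size and scaling rule are assumptions, not the actual spec.

POS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
E2M1_GRID = np.concatenate([-POS[::-1], POS])  # signed E2M1 values

def quantize_block(block: np.ndarray) -> tuple[np.ndarray, float]:
    """Snap one block of floats onto the E2M1 grid with a shared scale."""
    scale = max(np.abs(block).max() / 6.0, 1e-12)  # 6.0 = max |E2M1| value
    idx = np.abs(block[:, None] / scale - E2M1_GRID[None, :]).argmin(axis=1)
    return E2M1_GRID[idx], scale

x = np.random.randn(16).astype(np.float32)  # one 16-element block
q, s = quantize_block(x)
print("max abs quantization error:", float(np.abs(x - q * s).max()))
```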
The NVIDIA Blackwell Ultra platform excels at both training and inference. In the recent MLPerf Inference v5.1 benchmarks, NVIDIA GB300 NVL72 systems delivered record-setting performance using NVFP4. Results included up to 5x higher throughput per GPU on the 671-billion-parameter DeepSeek-R1 reasoning model compared with the NVIDIA Hopper architecture, along with leadership performance on all newly introduced benchmarks like the Llama 3.1 405B model.
The Fabric of a Supercomputer: NVLink Switch and NVIDIA Quantum-X800 InfiniBand
To connect over 4,600 Blackwell Ultra GPUs into a single, cohesive supercomputer, Microsoft Azure’s cluster relies on a two-tiered NVIDIA networking architecture designed for both scale-up performance within the rack and scale-out performance across the entire cluster.
Within each GB300 NVL72 rack, the fifth-generation NVIDIA NVLink Switch fabric provides 130 TB/s of direct, all-to-all bandwidth between the 72 Blackwell Ultra GPUs. This transforms the entire rack into a single, unified accelerator with a shared memory pool, a critical design for massive, memory-intensive models.
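Dividing that rack-level figure evenly gives a rough sense of what each GPU sees; this is a simplification that ignores switch topology and traffic patterns:

```python
# Rough per-GPU share of the rack-level NVLink figure quoted above.
# Simple division; ignores switch topology and actual traffic patterns.
TOTAL_NVLINK_TBPS = 130
GPUS_PER_RACK = 72
print(f"~{TOTAL_NVLINK_TBPS / GPUS_PER_RACK:.2f} TB/s per GPU")  # ~1.81 TB/s
```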
To scale beyond the rack, the cluster uses the NVIDIA Quantum-X800 InfiniBand platform, purpose-built for trillion-parameter-scale AI. Featuring NVIDIA ConnectX-8 SuperNICs and Quantum-X800 switches, the platform provides 800 Gb/s of bandwidth per GPU, ensuring seamless communication across all 4,608 GPUs.
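Multiplying that per-GPU figure across the full cluster gives a ballpark for aggregate injection bandwidth; this is a naive upper bound that ignores topology and oversubscription:

```python
# Ballpark aggregate injection bandwidth of the scale-out fabric, from the
# per-GPU figure quoted above. A naive upper bound; ignores topology,
# oversubscription and protocol overhead.
GPUS = 4608
PER_GPU_GBPS = 800
total_pbps = GPUS * PER_GPU_GBPS / 1e6  # Gb/s -> Pb/s
print(f"~{total_pbps:.2f} Pb/s aggregate across {GPUS} GPUs")  # ~3.69 Pb/s
```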
Microsoft Azure’s cluster also uses NVIDIA Quantum-X800’s advanced adaptive routing, telemetry-based congestion control and performance isolation capabilities, as well as NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) v4, which offloads collective operations into the switches to significantly improve the efficiency of large-scale training and inference.
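The benefit of in-network reduction is easiest to see in a toy traffic model: with switch-side aggregation, each GPU sends its buffer roughly once, versus roughly twice for a classic ring allreduce. The model below is a simplification under stated assumptions, not a description of SHARP’s actual implementation:

```python
# Toy traffic model: data each GPU sends for one allreduce of size_gb,
# ring vs. in-network (SHARP-style) reduction. Highly simplified; real
# collectives pipeline, chunk and overlap communication with compute.

def ring_allreduce_sent(size_gb: float, n: int) -> float:
    # A classic ring allreduce sends about 2*(n-1)/n of the buffer per GPU.
    return 2 * (n - 1) / n * size_gb

def in_network_sent(size_gb: float) -> float:
    # With switch-side reduction, each GPU sends its buffer up the tree
    # once and receives the fully reduced result back.
    return size_gb

size_gb, n = 1.0, 4608  # a 1 GB gradient buffer across the full cluster
print(f"ring:       {ring_allreduce_sent(size_gb, n):.3f} GB per GPU")
print(f"in-network: {in_network_sent(size_gb):.3f} GB per GPU")
```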
Driving the Future of AI
Delivering the world’s first production NVIDIA GB300 NVL72 cluster at this scale required a reimagining of every layer of Microsoft’s data center, from custom liquid cooling and power distribution to a reengineered software stack for orchestration and storage.
This latest milestone marks a major step forward in building the infrastructure that will unlock the future of AI. As Azure scales toward its goal of deploying hundreds of thousands of NVIDIA Blackwell Ultra GPUs, even more innovations are poised to emerge from customers like OpenAI.
Learn more about this announcement on the Microsoft Azure blog.