Monday, October 13, 2025

NVIDIA, Partners Drive Next-Gen Efficient Gigawatt AI Factories in Buildup for Vera Rubin

At the OCP Global Summit, NVIDIA is offering a glimpse into the future of gigawatt AI factories.

NVIDIA will unveil specifications for the NVIDIA Vera Rubin NVL144 MGX-generation open-architecture rack servers, which more than 50 MGX partners are gearing up for, along with ecosystem support for NVIDIA Kyber, which connects 576 Rubin Ultra GPUs and is built to support growing inference demands.

Some 20-plus industry partners are showcasing new silicon, components, power systems and support for the next-generation 800-volt direct current (VDC) data centers of the gigawatt era that will support the NVIDIA Kyber rack architecture.

Foxconn provided details on its 40-megawatt Taiwan data center, Kaohsiung-1, being built for 800 VDC. CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure and Together AI are among other industry pioneers designing for 800-volt data centers. In addition, Vertiv unveiled its space-, cost- and energy-efficient 800 VDC MGX reference architecture, a complete power and cooling infrastructure architecture. HPE is announcing product support for NVIDIA Kyber as well as NVIDIA Spectrum-XGS Ethernet scale-across technology, part of the Spectrum-X Ethernet platform.

Shifting to 800 VDC infrastructure from traditional 415 or 480 VAC three-phase systems offers increased scalability, improved energy efficiency, reduced materials usage and higher capacity for performance in data centers. The electric vehicle and solar industries have already adopted 800 VDC infrastructure for similar benefits.

The Open Compute Project, founded by Meta, is an industry consortium of hundreds of computing and networking providers and more, focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure.

Vera Rubin NVL144: Designed to Scale for AI Factories

The Vera Rubin NVL144 MGX compute tray offers an energy-efficient, 100% liquid-cooled, modular design. Its central printed circuit board midplane replaces traditional cable-based connections for faster assembly and serviceability, with modular expansion bays for NVIDIA ConnectX-9 800GB/s networking and NVIDIA Rubin CPX for massive-context inference.

The NVIDIA Vera Rubin NVL144 offers a major leap in accelerated computing architecture and AI performance. It's built for advanced reasoning engines and the demands of AI agents.

Its foundational design lives in the MGX rack architecture and will be supported by 50+ MGX system and component partners. NVIDIA plans to contribute the upgraded rack as well as the compute tray innovations as an open standard to the OCP consortium.

Its standards for compute trays and racks enable partners to mix and match in modular fashion and scale faster with the architecture. The Vera Rubin NVL144 rack design features energy-efficient 45°C liquid cooling, a new liquid-cooled busbar for higher performance and 20x more energy storage to keep power steady.

The MGX upgrades to compute tray and rack architecture boost AI factory performance while simplifying assembly, enabling a rapid ramp-up to gigawatt-scale AI infrastructure.

NVIDIA is a leading contributor to OCP standards across multiple hardware generations, including key portions of the NVIDIA GB200 NVL72 system electro-mechanical design. The same MGX rack footprint supports GB300 NVL72 and will support Vera Rubin NVL144, Vera Rubin NVL144 CPX and Vera Rubin CPX for higher performance and fast deployments.

If You Build It, They Will Come: NVIDIA Kyber Rack Server Generation

The OCP ecosystem is also preparing for NVIDIA Kyber, featuring innovations in 800 VDC power delivery, liquid cooling and mechanical design.

These innovations will support the move to the NVIDIA Kyber rack server generation, the successor to NVIDIA Oberon, which will house a high-density platform of 576 NVIDIA Rubin Ultra GPUs by 2027.

The simplest way to counter the challenges of high-power distribution is to increase the voltage. Transitioning from a traditional 415 or 480 VAC three-phase system to an 800 VDC architecture offers numerous benefits.

The transition underway enables rack server partners to move from 54 VDC in-rack components to 800 VDC for better results. An ecosystem of direct current infrastructure providers, power system and cooling partners, and silicon makers, all aligned on open standards for the MGX rack server reference architecture, attended the event.

NVIDIA Kyber is engineered to boost rack GPU density, scale up network size and maximize performance for large-scale AI infrastructure. By rotating compute blades vertically, like books on a shelf, Kyber enables up to 18 compute blades per chassis, while purpose-built NVIDIA NVLink switch blades are integrated at the back via a cable-free midplane for seamless scale-up networking.

Over 150% more power is transmitted through the same copper with 800 VDC, eliminating the need for 200-kg copper busbars to feed a single rack.
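The copper savings follow from basic electrical arithmetic: for a fixed power level, the current a conductor must carry falls in proportion to the voltage, and resistive loss falls with the square of the current. The sketch below illustrates this with an assumed 1 MW rack load; the power figure is hypothetical for illustration, not an NVIDIA specification, and the comparison is between the 54 VDC in-rack level and the 800 VDC facility level mentioned above.

```python
# Back-of-envelope look at why higher distribution voltage shrinks copper
# requirements. RACK_POWER_W is an assumed illustrative load, not a spec.

RACK_POWER_W = 1_000_000  # hypothetical ~1 MW rack of the gigawatt era

def current_amps(power_w: float, volts: float) -> float:
    """Current needed to deliver power_w at a DC voltage (I = P / V)."""
    return power_w / volts

i_54v = current_amps(RACK_POWER_W, 54)    # legacy 54 VDC in-rack busbar
i_800v = current_amps(RACK_POWER_W, 800)  # 800 VDC facility-level delivery

print(f"Current at 54 VDC:  {i_54v:,.0f} A")   # ~18,519 A
print(f"Current at 800 VDC: {i_800v:,.0f} A")  # 1,250 A

# Conductor cross-section is sized to current, so the same power needs
# roughly 14.8x less current at 800 V; I^2*R loss in a given busbar drops
# by that factor squared.
print(f"Current reduction:  {i_54v / i_800v:.1f}x")
```

The same relationship, read the other way, is why a fixed run of copper can carry substantially more power at 800 VDC than at lower voltages.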

Kyber will become a foundational element of hyperscale AI data centers, enabling superior performance, efficiency and reliability for state-of-the-art generative AI workloads in the coming years. NVIDIA Kyber racks offer a way for customers to reduce the amount of copper they use by the ton, leading to millions of dollars in cost savings.

NVIDIA NVLink Fusion Ecosystem Expands

In addition to hardware, NVIDIA NVLink Fusion is gaining momentum, enabling companies to seamlessly integrate their semi-custom silicon into highly optimized and broadly deployed data center architecture, reducing complexity and accelerating time to market.

Intel and Samsung Foundry are joining the NVLink Fusion ecosystem, which includes custom silicon designers and CPU and IP partners, so that AI factories can scale up quickly to handle demanding workloads for model training and agentic AI inference.

  • As part of the recently announced NVIDIA and Intel collaboration, Intel will build x86 CPUs that integrate into NVIDIA infrastructure platforms using NVLink Fusion.
  • Samsung Foundry has partnered with NVIDIA to meet growing demand for custom CPUs and custom XPUs, offering design-to-manufacturing expertise for custom silicon.

It Takes an Open Ecosystem: Scaling the Next Generation of AI Factories

More than 20 NVIDIA partners are helping deliver rack servers built on open standards, enabling the gigawatt AI factories of the future.

Learn more about NVIDIA and the Open Compute Project at the OCP Global Summit, happening at the San Jose Convention Center from Oct. 13-16.