Subscriber Exclusive
I recorded a conversation last week with JP Buzzell, VP and Data Center Chief Architect at Eaton. The podcast drops for everyone in about a week. Paying subscribers are getting the investment call first.
“The United States is scarce on energy.”
That was Jensen Huang on the Dwarkesh Podcast two weeks ago, in a ninety-minute conversation that the market mostly read as a China export-controls debate. The more important framing sits three sentences later in the same exchange. Jensen laid out what he called the five-layer cake of AI, and at the bottom of the cake, below chips and models and applications, is energy. His argument: when you have an abundance of energy, it makes up for chips. When you have an abundance of chips, it makes up for energy. China has the first condition. America has the second. That structural asymmetry is the most important framing the market has been handed in months.
The Watt Asymmetry
The market read the Dwarkesh interview as a chip-export argument. The structural read is that Jensen just named the investable thesis of the next three years. China’s abundance of energy lets it run older chips in parallel. America’s scarcity of energy forces it to build the most efficient stack in the world. The investable question is not who wins the chip race. It is who builds the stack that makes scarce American watts go furthest.
JP is the primary source behind this piece. He operated nuclear reactors in the US Navy for twelve years, taught nuclear operations for three, designed critical facilities at Meta and the first Zettascale data hall at Oracle, and recently co-led the release of the OCP DCF Power Distribution LVDC white paper: a 155-page document that 200 companies signed onto as the first common-language framework for the power stack at AI scale. His frame on the physical layer and Jensen’s frame on the five-layer cake are the same argument seen from opposite ends of the telescope.
The Hyperscalers Just Confirmed the Budget
Forty-eight hours ago, the demand side of the Watt Asymmetry got marked. Microsoft, Meta, Alphabet, and Amazon all reported Q1 2026 on April 29, and all four either reaffirmed or raised their AI infrastructure capex commitments. Per Jefferies post-call consensus, the combined 2026 number is now $693 billion, up 13% in 48 hours from $613 billion pre-print. The 2027 consensus moved by nearly the same percentage, from $723 billion to $821 billion. This is the largest two-year concentrated infrastructure cycle in technology history, and it just got bigger.
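The consensus moves above are easy to sanity-check. A minimal sketch, using only the Jefferies dollar figures quoted in the paragraph (the percentages are derived, not sourced):

```python
# Sanity check on the Jefferies post-call consensus moves quoted above.
pre_2026, post_2026 = 613, 693   # combined hyperscaler 2026 capex consensus, $B
pre_2027, post_2027 = 723, 821   # same, for 2027

move_2026 = (post_2026 - pre_2026) / pre_2026 * 100
move_2027 = (post_2027 - pre_2027) / pre_2027 * 100

print(f"2026 consensus move: +{move_2026:.1f}%")  # +13.1%
print(f"2027 consensus move: +{move_2027:.1f}%")  # +13.6%
```

The two moves round to 13% and 14% respectively, which is why "nearly the same percentage" is the precise wording.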
Alphabet raised 2026 capex to $180-190 billion, up from $175-185 billion, and CFO Anat Ashkenazi guided 2027 capex to “significantly increase” from there. Google Cloud revenue grew 63% to $20 billion. Microsoft guided full-year capex to roughly $190 billion with Q4 alone above $40 billion, and CFO Amy Hood said the company expects to remain capacity-constrained through all of 2026. Meta raised its 2026 capex range to $125-145 billion from $115-135 billion, citing higher component pricing (memory, primarily) and additional data center costs to support future-year capacity. Amazon projected 2026 capex of approximately $200 billion, with AWS revenue up 28% year over year, the fastest growth in fifteen quarters. The Watt Asymmetry is no longer a forecast. It is a budget. Every dollar in those numbers eventually becomes a watt that has to land at one megawatt density, in a refrigerator-sized space, in an American jurisdiction where the grid will not interconnect new load for eight years.
Downstream Design
JP’s central claim is a causal chain the investment community has been modeling backwards. Software drives network protocol. Network protocol drives rack density. Rack density drives cooling architecture. Cooling architecture drives the transition to direct current. The stack is downstream, not upstream. Most PMs model the AI build-out from the chip down. JP models it from the workload down.
The number that makes this concrete is rack power density. Hyperscale halls a few years ago ran 10 kilowatts per rack. Zettascale pushed that to over 100 kilowatts. Blackwell NVL72 is already at 120 kilowatts. Rubin brings 300 kilowatts. Feynman-generation reference designs target 600 kilowatts to one megawatt per rack. That is not a five-year ramp. That is a three-year ramp. One megawatt per rack is the same power as three hundred American homes, delivered into a space smaller than a refrigerator, over copper and optics that have to survive inside the same enclosure.
At that density, every architectural assumption in the hyperscale handbook breaks. You cannot deliver a megawatt at 48 volts, so the rack has to move to 800V DC. Run the arithmetic: at 48V, one megawatt per rack requires roughly 20,800 amps of current, the equivalent of the main breakers of 200 American homes running simultaneously through a single rack’s power path. NVIDIA’s own estimate is 200 kilograms of copper busbar per rack at that architecture, the weight of three adult men concentrated into the power spine of one cabinet. You cannot cool a megawatt with air, so liquid cooling becomes mandatory. You cannot run copper interconnects at the lengths and bandwidths required without bleeding watts to resistive loss, so optics have to move onto the rack itself. This is the physical expression of the Watt Asymmetry. If American power is scarce and American racks are running at a megawatt, the only path that works is extreme co-design across every layer simultaneously. China does not need to do this. As Jensen pointed out on Dwarkesh, if your watts are abundant, you can keep using older chips in parallel, and 7-nanometer chips are essentially Hopper anyway.
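The voltage arithmetic above is just power delivery at scale: current is power divided by voltage, and resistive loss scales with the square of current. A minimal sketch of the 48V-versus-800V comparison, assuming a 1 MW rack (from the text) and identical conductor resistance on both buses, which a real busbar design would not hold constant:

```python
# Current required to deliver 1 MW at two bus voltages: I = P / V.
P = 1_000_000          # watts per rack
for V in (48, 800):
    I = P / V
    print(f"{V:>3} V bus -> {I:,.0f} A")
# 48 V bus  -> 20,833 A (the ~20,800 A figure in the text)
# 800 V bus ->  1,250 A, a ~16.7x reduction in current

# Resistive (I^2 * R) loss for the same conductor resistance:
# the 800 V bus dissipates (48/800)^2 = 0.36% of the 48 V bus's
# copper loss, roughly a 278x reduction.
loss_ratio = (48 / 800) ** 2
print(f"relative I^2R loss at 800 V: {loss_ratio:.4f}")
```

The quadratic loss term is why the transition is not optional: holding 48V at a megawatt means either an absurd copper cross-section (the 200 kg busbar estimate) or watts burned in the power path that the Watt Asymmetry says America cannot spare.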
The Industrial Answer: DSX, Omniverse, Emerald AI
If the Watt Asymmetry is the problem, NVIDIA’s platform stack is the answer, and it runs three layers deep. At the physical layer sits the DSX Beam design, Eaton’s implementation of NVIDIA’s DSX blueprint, engineered for current Blackwell racks and the coming Rubin and Feynman generations, and for both legacy AC and the 800V DC architecture that becomes mandatory above 600 kilowatts per rack. Above DSX sits NVIDIA Omniverse, which lets an operator physically simulate the full facility, power and cooling and optical routing, before a single spade breaks ground. Above Omniverse sits Emerald AI, the orchestration layer JP described as “good grid citizen” behavior: finding stranded capacity on the grid, routing workloads in real time, and delivering fault ride-through that keeps a training run alive through waveform events.
The OCP Paper as Signal
Just before the Dwarkesh interview, JP co-led the release of the OCP DCF Power Distribution LVDC white paper. 155 pages. 200 companies aligned. The paper itself is not the catalyst. The signal is. Two hundred companies across silicon, cloud, and power OEMs coordinated on a common language for 800V DC because every one of them is about to spend real capex building against it. Version 1.1 focuses specifically on rack and power distribution, which is the layer where the transition actually lives. Coordinated standards arrive before coordinated capital, and the capital is coming.
Below the paywall: the named companies at every layer, the three names that carry the thesis, the EV supply chain that quietly built the 800V backbone before hyperscalers needed it, what Eaton’s May 5 print has to show, the Bloom Q1 numbers Wall Street undercounted, and the short thesis that would force me to cover.