Macro Notes

The $50,000 Problem Nobody's Pricing Into AI Stocks

Macro Notes
Dec 28, 2025

Two weeks ago, I was reviewing NVIDIA’s latest earnings call when their VP of Data Center Engineering mentioned something that made me pause mid-sentence.

He casually noted that a single Blackwell Ultra rack now consumes up to 140 kilowatts of power—roughly equivalent to powering 100 average American homes. But that wasn’t what caught my attention. It was what came next: “The cooling system for a single Nvidia Blackwell Ultra NVL72 rack costs a staggering $50,000.”
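The homes comparison is easy to sanity-check. A quick sketch, assuming an average US household consumes on the order of 10,500 kWh per year (a ballpark EIA figure, not something stated on the call):

```python
# Back-of-envelope check: does a 140 kW rack really equal ~100 US homes?
# Assumption: ~10,500 kWh/year average US household consumption (EIA ballpark).
HOURS_PER_YEAR = 8760
avg_home_kw = 10_500 / HOURS_PER_YEAR    # ~1.2 kW continuous draw per home
rack_kw = 140                            # Blackwell Ultra NVL72 rack, as quoted
homes_equivalent = rack_kw / avg_home_kw
print(f"{homes_equivalent:.0f} homes")   # on the order of 100-120 homes
```

The "roughly 100 homes" framing holds up; the exact figure depends on which year's household-consumption average you plug in.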

Not the chips. Not the servers. The cooling.

I started pulling data center earnings transcripts and infrastructure reports. The pattern was everywhere once I knew where to look. Cooling systems account for 7% to over 30% of total data center energy consumption, with AI-optimized facilities at the high end of that range. The global data center cooling market hit $14.21 billion in 2024 and is projected to reach $34.12 billion by 2033—a 140% increase in less than a decade.
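The growth math in those projections is worth making explicit. The roughly 10% implied compound annual growth rate below is my own derivation from the cited endpoints, not a figure from the reports:

```python
# Implied growth of the data center cooling market, per the cited figures.
market_2024 = 14.21   # $B, 2024 market size
market_2033 = 34.12   # $B, projected 2033 market size
years = 2033 - 2024   # 9-year horizon

total_growth = market_2033 / market_2024 - 1           # ~140% total increase
cagr = (market_2033 / market_2024) ** (1 / years) - 1  # ~10% per year
print(f"total growth {total_growth:.0%}, implied CAGR {cagr:.1%}")
```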

But here’s the disconnect that stopped me cold: while everyone obsesses over which AI chip will win or which cloud provider will dominate, there’s a $34 billion infrastructure crisis building that almost nobody is talking about.

NVIDIA’s Blackwell chips dissipate up to 1,400 watts per GPU. In liquid-cooled configurations, these chips can generate 1,200 watts of thermal energy when running at full capacity. Traditional air cooling—the backbone of data centers for 40 years—physically cannot handle this heat load anymore.
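To see why air hits a wall, consider the GPU heat load alone. The 72-GPU count comes from the NVL72 designation; the note about what else draws power in the rack is my assumption, not a breakdown from NVIDIA:

```python
# GPU-only heat load for an NVL72 rack at the quoted per-chip figure.
gpus_per_rack = 72       # NVL72 = 72 Blackwell GPUs per rack
watts_per_gpu = 1_400    # peak dissipation cited above
gpu_heat_kw = gpus_per_rack * watts_per_gpu / 1_000
print(f"GPU heat alone: {gpu_heat_kw:.1f} kW")
# ~100 kW of heat from GPUs alone, before CPUs, networking, and
# power-conversion losses push the rack toward the ~140 kW total.
```

Roughly 100 kW of heat concentrated in a single rack is an order of magnitude beyond what conventional air-cooled designs were built to remove.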

The entire industry is being forced into the most dramatic infrastructure transition since data centers were invented. And it’s happening right now.

Liquid cooling held 46% of the data center cooling market in 2024, up from essentially zero five years ago. But implementation is brutally constrained. Market growth in 2023 was primarily limited by production capacity for components like Cooling Distribution Units (CDUs), not demand. Lead times stretch 6-12 months. Customers are paying premium prices just to secure allocation.

This isn’t a semiconductor story. It’s an infrastructure bottleneck that every AI investment depends on.

While Wall Street analysts debate which AI models will win or whether NVIDIA’s moat is sustainable, a handful of companies are quietly:

  • Securing sole-source or duopoly positions in critical cooling technologies

  • Raising prices 20-40% annually as demand overwhelms supply capacity

  • Building multi-year backlogs with Fortune 100 customers desperate for solutions

  • Trading at 12-18x earnings because markets categorize them as “industrial equipment manufacturers”

The mispricing is extraordinary.

In this deep dive, I’m going to walk you through:

  • Why traditional cooling is physically obsolete for AI workloads (and why this transition is irreversible)

  • The three types of advanced cooling dominating new data center builds—and which companies control each technology

  • The specific companies positioned at chokepoints in liquid cooling infrastructure, with pricing power that rivals monopolies

  • Why this opportunity is invisible to most investors (and why that’s about to change)

The AI infrastructure build-out everyone’s betting on? It literally cannot happen without solving the cooling crisis first.

And the companies solving it are trading like they make commodity HVAC equipment.

Let’s dig in.

My Three Positions in the AI Cooling Bottleneck

If you understand the thesis—that liquid cooling infrastructure is the actual bottleneck in AI deployment, while the companies enabling it trade at industrial multiples—you’ll see why these three positions are set up to dramatically outperform.

This post is for paid subscribers
