Macro Notes

The AI Invisibles

Paul
Jan 22, 2026

Everyone’s buying NVIDIA. Everyone’s talking about ChatGPT. Everyone’s fantasizing about AGI replacing humans.

Meanwhile, three weeks ago, I watched a data center in Northern Virginia go dark for 47 minutes.

Not because of a cyberattack. Not because of a power grid failure. Because a single cooling system couldn’t handle the thermal load from a new cluster of H100 GPUs. The temperature spiked to 105°F. $2.3 million in compute sat idle while engineers scrambled to reroute coolant flow.

This is the story nobody tells you about the AI revolution.

The $7 Trillion Misdirection

Wall Street analysts keep revising their AI market projections upward. Goldman now says $7 trillion by 2030. Morgan Stanley says $15 trillion by 2035. Ark Invest says $200 trillion by 2050.

But here’s what they’re getting wrong: they’re measuring the wrong layer of the stack.

The market cap of OpenAI, Anthropic, Mistral, Cohere, and every other foundation model company combined won’t capture even 10% of AI’s economic value. The application layer—the sexy stuff everyone talks about—is a rounding error.

The real money, the generational wealth, the monopolistic moats? They’re being built in the infrastructure layer nobody sees.

Why Infrastructure Always Wins

Think about the California Gold Rush. Everyone romanticizes the prospectors, but almost none of them struck it rich, and nobody remembers their names.

You know who got rich? Levi Strauss selling denim. Samuel Brannan selling shovels. The guys who built the water systems, the logistics networks, the basic infrastructure.

In AI, the same pattern is repeating at trillion-dollar scale.

Every ChatGPT query you run requires:

  • 450 watts of power delivery per GPU

  • Precision cooling to within 2°C variance

  • Optical interconnects transferring 400Gbps between chips

  • Labeled training data verified by human annotators

  • Clean power backed up by diesel generators

  • Physical connectors rated for 600 amps continuous draw

OpenAI doesn’t build any of this. Google doesn’t build most of it. The hyperscalers are customers, not manufacturers.

The Three Invisible Choke Points

After spending six months analyzing the AI infrastructure stack—talking to data center engineers, supply chain specialists, and procurement managers at major cloud providers—I’ve identified three choke points where a handful of companies control access to AI at scale.

Choke Point #1: Thermal Management

Here’s a problem most investors don’t understand: modern AI chips are running into the laws of physics.

An H100 GPU generates 700 watts of heat in a package the size of your palm. Put 8 of them in a server, and you’re dealing with 5.6 kilowatts of thermal output—roughly the same as a commercial oven running at full blast.

Now put 1,000 of these servers in a data center. You’re managing 5.6 megawatts of continuous heat generation. For reference, that’s enough to heat 4,000 homes through a Chicago winter.

Traditional air cooling maxes out around 30 kilowatts per rack. AI clusters need 100+ kilowatts per rack. The math doesn’t work.
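The thermal arithmetic above is easy to sanity-check. Here is a back-of-envelope sketch using the round numbers cited in this piece (700 W per H100, 8 GPUs per server, 1,000 servers, 30 kW and 100 kW rack ceilings), not vendor spec sheets:

```python
# Back-of-envelope thermal math for an H100 deployment.
# All figures are the rough numbers from the text, not datasheet values.

GPU_TDP_W = 700            # heat output per H100 at full load
GPUS_PER_SERVER = 8
SERVERS = 1_000

server_heat_kw = GPU_TDP_W * GPUS_PER_SERVER / 1_000
total_heat_mw = server_heat_kw * SERVERS / 1_000

AIR_COOLED_RACK_KW = 30    # practical ceiling for air cooling
AI_RACK_KW = 100           # density modern AI clusters target

# How many of these servers fit per rack under each cooling regime?
air_servers_per_rack = AIR_COOLED_RACK_KW // server_heat_kw
liquid_servers_per_rack = AI_RACK_KW // server_heat_kw

print(f"per-server heat:       {server_heat_kw:.1f} kW")   # 5.6 kW
print(f"fleet heat:            {total_heat_mw:.1f} MW")    # 5.6 MW
print(f"servers/rack (air):    {air_servers_per_rack:.0f}")
print(f"servers/rack (liquid): {liquid_servers_per_rack:.0f}")
```

The rack-level numbers are the story: under air cooling you fit about five of these servers per rack; liquid cooling fits seventeen.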

This is why liquid cooling is no longer optional—it’s existential. And there are exactly three companies in the world that can deliver precision liquid cooling at hyperscale.

The problem they solve: preventing thermal throttling that would reduce AI compute capacity by 40%. Without them, your $3 million GPU cluster runs like a $1.8 million cluster.
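The throttling cost follows directly from those two figures; a quick check of the arithmetic, taking the cluster price and the 40% figure from the text:

```python
# Effective value of a GPU cluster under sustained thermal throttling.
# Cluster price and throttle percentage are the figures from the text.

CLUSTER_COST = 3_000_000
THROTTLE_LOSS = 0.40       # compute capacity lost to thermal throttling

effective_value = CLUSTER_COST * (1 - THROTTLE_LOSS)
print(f"${effective_value:,.0f}")  # $1,800,000
```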

Choke Point #2: Data Labeling at Scale

Foundation models need data. Not just any data—meticulously labeled, verified, filtered data.

GPT-4 was trained on roughly 13 trillion tokens. But before those tokens could train anything, humans had to:

  • Verify factual accuracy

  • Filter toxic content

  • Label sentiment and intent

  • Validate logical reasoning

  • Classify domain expertise

  • Rate response quality

This isn’t a one-time job. Every model iteration needs new data. Every specialized model needs domain-specific labeling. Every safety update needs human verification.

The companies that built global networks of specialized annotators—subject matter experts, multilingual reviewers, domain specialists—control access to the fuel that powers AI.

The problem they solve: you can’t train a medical AI on random internet text. You need radiologists labeling X-rays, pathologists annotating tissue samples, clinicians validating treatment protocols. These companies built the infrastructure to do this at billion-example scale.

Choke Point #3: Power Interconnects

This is the most technical and least understood bottleneck.

When you connect 8 GPUs in a server, they need to communicate at 900GB/s. When you connect 256 GPUs in a cluster, you need 14.4 terabits per second of bandwidth between racks.

This requires physical connectors and cables that can:

  • Handle 600+ amps without resistance heating

  • Maintain signal integrity at 112Gbps per lane

  • Fit in rack spaces with <2mm clearance

  • Operate continuously for 5+ years without degradation
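Those bandwidth figures imply a link budget you can work out on paper. A sketch using the numbers above (900 GB/s per GPU, 14.4 Tb/s rack-to-rack, 112 Gb/s per lane); the lane count is my own derived illustration, not a published spec:

```python
# Rough link-budget arithmetic from the figures in the text.

import math

PER_GPU_GBPS = 900 * 8      # 900 GB/s per GPU -> 7,200 Gb/s
INTER_RACK_TBPS = 14.4      # cluster-level bandwidth between racks
LANE_GBPS = 112             # signal rate per lane

# Minimum number of 112 Gb/s lanes to carry the inter-rack traffic.
lanes_between_racks = math.ceil(INTER_RACK_TBPS * 1_000 / LANE_GBPS)

print(PER_GPU_GBPS)         # 7200
print(lanes_between_racks)  # 129
```

Well over a hundred lanes per rack pair, each needing its connectors, cables, and retimers: that is the physical-layer bill of materials the two vendors in question sell.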

There are exactly two companies in the world manufacturing the high-amperage, high-bandwidth connectors that make modern AI clusters possible. Both are companies you’ve never heard of. Both have 18-month lead times. Both have gross margins above 60%.

The problem they solve: without proper power delivery and interconnects, your GPUs can’t communicate fast enough to train models. Your $100M data center becomes a very expensive space heater.

Why These Businesses Will Capture More Value Than OpenAI

Here’s the uncomfortable truth for AI equity investors:

Foundation model companies have terrible economics. They’re spending $500M+ on compute, paying researchers $1M+ salaries, and racing to zero on pricing. GPT-4 API pricing has dropped 97% in 18 months. Inference costs are collapsing.
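That 97%-in-18-months figure implies a brutal compounded rate. A quick check of the implied monthly decline, assuming a constant monthly rate (a simplification):

```python
# Implied compounded monthly price decline from "97% drop in 18 months".

remaining = 0.03            # 3% of the original price remains
months = 18

monthly_factor = remaining ** (1 / months)
monthly_decline = 1 - monthly_factor
print(f"{monthly_decline:.1%}")  # ~17.7% per month
```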

Infrastructure companies have beautiful economics:

  • ✓ Long-term contracts (3-5 years typical)

  • ✓ Sticky relationships (can’t swap cooling vendors mid-deployment)

  • ✓ Recurring revenue (maintenance, upgrades, expanded capacity)

  • ✓ Pricing power (limited competition, high switching costs)

  • ✓ Capital-light models (many are asset-light service providers)

Every hyperscaler, every enterprise, every AI startup becomes a customer. They’re selling shovels in a gold rush where everyone needs to dig.


🔒 PREMIUM: The Actual Companies & How to Position Them

The following section contains:

  • Specific publicly traded companies in each category

  • Current valuations and financial metrics (January 2026)

  • Position sizing recommendations based on risk profile

  • Catalysts and timing for entry

  • Red flags and competitive threats to monitor

This post is for paid subscribers

© 2026 Macro Notes