
AI Power & Cooling

Liquid cooling · PDUs · power semiconductors · UPS · SiC/GaN. The decade-long capex wave nobody is talking about — until a 120 kW rack needs cooling.

Updated April 2026 · 8 min read


What It Is

Every watt consumed by an AI GPU must be (a) delivered from the utility grid through a power chain and (b) removed as heat. The power chain runs from grid → transformer → UPS → switchgear → PDU → server power supply → VRMs (voltage regulator modules) on the GPU board. The thermal chain runs from GPU die → cold plate (liquid cooling) or heat sink (air cooling) → cooling distribution unit (CDU) → cooling tower or chiller → atmosphere.
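
Losses in that power chain compound multiplicatively, which is why stage efficiency matters so much at these densities. A minimal sketch of the idea — the per-stage efficiencies below are illustrative round numbers assumed for this example, not measured figures for any vendor's product:

```python
import math

# Illustrative power-chain model: grid -> transformer -> UPS ->
# switchgear/PDU -> server PSU -> VRM -> GPU die.
# Stage efficiencies are assumed for illustration only.
STAGE_EFFICIENCY = {
    "transformer": 0.99,
    "ups": 0.96,
    "switchgear_pdu": 0.99,
    "server_psu": 0.97,
    "vrm": 0.92,
}

def grid_watts_per_gpu_watt(stages=STAGE_EFFICIENCY):
    """Grid watts drawn per watt actually delivered to the GPU die."""
    eff = 1.0
    for e in stages.values():
        eff *= e  # losses compound multiplicatively down the chain
    return 1.0 / eff

ratio = grid_watts_per_gpu_watt()
print(f"~{ratio:.2f} W from the grid per 1 W at the GPU")
```

Under these assumptions the chain draws roughly 1.2 W from the grid per watt of GPU load — every fractional efficiency gain at the UPS or VRM stage scales across the entire facility.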

The physics that drove this to the top of the AI infrastructure agenda: a 2020-era server rack consumed 10–15 kW. A 2024 air-cooled GPU rack consumed 30–40 kW, near the limit of air cooling. A 2025 NVIDIA GB200 NVL72 rack consumes 120 kW — making liquid cooling a physical requirement, not an option. The entire data center design stack must be rebuilt for this power density: wider busbars, larger UPS banks, direct liquid cooling infrastructure, and on-site power substations.
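
Why 120 kW forces the switch to liquid follows directly from the heat-transfer relation Q = ṁ·cp·ΔT: water carries roughly four times the heat per kilogram of air and is ~800× denser. A back-of-envelope comparison, assuming a 10 °C coolant temperature rise for both fluids:

```python
# Coolant mass flow needed to remove 120 kW at a 10 K temperature rise,
# air vs. water: Q = m_dot * c_p * delta_T.
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
CP_WATER = 4186.0    # J/(kg*K), specific heat of water
RHO_AIR = 1.2        # kg/m^3, air near sea level
RHO_WATER = 1000.0   # kg/m^3

def mass_flow_kg_s(heat_w, cp, delta_t_k):
    """Coolant mass flow (kg/s) required to absorb heat_w at delta_t_k rise."""
    return heat_w / (cp * delta_t_k)

rack_w, dt = 120_000.0, 10.0
air_m3_s = mass_flow_kg_s(rack_w, CP_AIR, dt) / RHO_AIR
water_l_s = mass_flow_kg_s(rack_w, CP_WATER, dt) / RHO_WATER * 1000.0
print(f"air:   {air_m3_s:.1f} m^3/s (~{air_m3_s * 2118.9:.0f} CFM)")
print(f"water: {water_l_s:.1f} L/s")
```

Roughly 10 m³/s of air (on the order of 20,000 CFM) versus about 3 L/s of water per rack — the airflow figure is simply not achievable through a standard rack enclosure, which is what "liquid cooling as a physical requirement" means in practice.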

Why It Matters for AI Capex

Hyperscalers are spending $300B+ on AI infrastructure in 2026 — but semiconductors are only part of the bill. Industry estimates suggest 30–40% of a new AI data center's total construction cost is power and cooling infrastructure. At $300B total hyperscaler capex, that implies $90–120B flowing to power and cooling vendors — larger than the entire wafer fab equipment (WFE) market.

The power constraint extends beyond the data center: utilities in Northern Virginia (the world's largest data center market) are quoting 5–7 year interconnection timelines for new substations. Microsoft, Google, and Amazon are signing power purchase agreements (PPAs) for nuclear and natural gas generation to guarantee supply. Vertiv (VRT), the primary liquid cooling and power infrastructure vendor, has a backlog of $8B+ — more than 1.5× its annual revenue — and its stock has been one of the best-performing in the S&P 500 during the AI buildout.

The Power & Cooling Stack

| Layer | Product | Key Vendors | AI Demand Driver |
|---|---|---|---|
| Facility power | Transformers, switchgear, PDUs | Eaton, ABB, Vertiv | 4–8× power density increase |
| UPS | Uninterruptible power supplies, batteries | Vertiv (VRT), Eaton | Higher per-rack kVA ratings required |
| Rack-level power | Power distribution units, busbars | Vertiv (VRT), Legrand | 48V bus replacing 12V for GB200 |
| Board-level power | VRMs, power management ICs | MPWR, ON, Renesas | GPU TDPs rising 2–3× per generation |
| Power conversion | AC-DC, DC-DC converters | BE, VRT, Delta | Efficiency requirement >97% at high power |
| Liquid cooling | CDUs, cold plates, manifolds | VRT, Airedale, CoolIT | 120 kW/rack mandates liquid |
| Power semis | SiC MOSFETs, GaN FETs, IGBTs | ON, WOLF, STMicro | Higher switching frequency, lower losses |

Supply Chain Players

Vertiv Holdings (VRT)

The dominant AI data center power and cooling infrastructure vendor. VRT makes UPS systems, PDUs, liquid cooling distribution units, precision cooling, and data center management software. Its backlog exceeded $8B in Q1 2026 — driven by liquid cooling orders for hyperscaler AI clusters. Vertiv's direct liquid cooling (DLC) line is qualified by NVIDIA for GB200 deployments. Revenue growing 30%+ with expanding margins.

Bloom Energy (BE)

Bloom makes solid oxide fuel cells (SOFCs) for on-site power generation — providing uninterruptible, grid-independent power for data centers. As utility grid interconnection timelines stretch to 5–7 years, hyperscalers are increasingly adopting on-site power generation to skip the queue. BE's order book has surged with AI data center customers and it is expanding into electrolyzer technology for green hydrogen.

Monolithic Power Systems (MPWR)

MPWR makes the voltage regulator modules (VRMs) on AI GPU server boards — the chips that convert 48V rack power down to the 0.7–1.0V the GPU cores actually consume. As GPU TDPs rise from 300W (A100) to 1,000W (B200), the VRM silicon must handle more current at higher efficiency. MPWR's AI compute revenue is growing 60–80% annually and represents its highest-margin product line.
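
The VRM scaling problem is Ohm's-law arithmetic: at roughly 1 V core voltage, every watt of TDP is roughly an ampere of current, and that current must be split across parallel VRM phases. A sketch — the 0.8 V core voltage and the 70 A per-phase limit are illustrative assumptions, not any specific part's datasheet rating:

```python
import math

def core_current_a(tdp_w, vcore_v):
    """DC current into the GPU core at a given TDP and core voltage (I = P/V)."""
    return tdp_w / vcore_v

def phases_needed(tdp_w, vcore_v=0.8, amps_per_phase=70.0):
    """Minimum VRM phase count, assuming an illustrative 70 A/phase power stage."""
    return math.ceil(core_current_a(tdp_w, vcore_v) / amps_per_phase)

# Published TDPs; voltage and phase limits are assumed for illustration.
for name, tdp in [("A100", 300), ("B200", 1000)]:
    amps = core_current_a(tdp, 0.8)
    print(f"{name}: {amps:.0f} A -> {phases_needed(tdp)} phases")
```

Going from a 300 W to a 1,000 W GPU at the same core voltage roughly triples the current and the phase count — which is the mechanism behind "rising GPU power = rising MPWR revenue per server."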

ON Semiconductor (ON)

ON makes power management ICs and SiC MOSFETs used in AC-DC power supplies and EV chargers. Its Intelligent Power segment sells power discretes and modules into data center PSUs and industrial power systems. ON is also a major SiC wafer customer for Wolfspeed. Data center and automotive are ON's two primary growth drivers.

Wolfspeed (WOLF)

The leading silicon carbide (SiC) wafer manufacturer. SiC enables power devices that switch faster and lose less energy as heat vs. silicon MOSFETs — critical for efficient AC-DC conversion at data center power densities. WOLF is the substrate supplier to ON, STMicro, Infineon, and others making SiC power devices. The company is building a new mega-factory in North Carolina (Siler City) for the SiC demand wave.
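
The SiC advantage can be seen in a first-order device loss model, P = I²·R_on + f_sw·E_sw: lower switching energy lets SiC run at a higher frequency (shrinking magnetics and converter size) while still losing less total power. All numeric values below are assumed round numbers to show the shape of the tradeoff, not datasheet figures for real parts:

```python
# First-order power-switch loss: conduction (I^2 * R_on) plus
# switching (f_sw * E_sw). All values are illustrative assumptions.
def device_loss_w(i_rms_a, r_on_ohm, f_sw_hz, e_sw_j):
    """Conduction plus switching loss, in watts, for one power switch."""
    return i_rms_a**2 * r_on_ohm + f_sw_hz * e_sw_j

# Assumed: SiC's lower per-cycle switching energy lets it run 3x the
# frequency of silicon at lower total loss.
si = device_loss_w(i_rms_a=20, r_on_ohm=0.040, f_sw_hz=50e3, e_sw_j=400e-6)
sic = device_loss_w(i_rms_a=20, r_on_ohm=0.040, f_sw_hz=150e3, e_sw_j=80e-6)
print(f"Si  @  50 kHz: {si:.1f} W")
print(f"SiC @ 150 kHz: {sic:.1f} W")
```

Under these assumptions the SiC device dissipates less at triple the switching frequency — that combination (smaller passives, less heat) is what "switch faster and lose less energy" buys at data center power densities.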

Metrics to Watch

  • VRT backlog and book-to-bill: VRT backlog is the best forward revenue indicator. $8B+ backlog represents 18+ months of revenue visibility.
  • Data center liquid cooling adoption rate: Air cooling → liquid cooling is the primary structural shift. When >50% of new hyperscaler racks are liquid-cooled, TAM doubles for VRT.
  • MPWR AI compute revenue growth: MPWR's VRM content per GPU server grows with TDP — rising GPU power = rising MPWR revenue per server.
  • Power utility interconnection timelines: Longer grid wait times accelerate on-site generation adoption (BE) and constrain hyperscaler rack deployments.
  • WOLF SiC wafer revenue and utilization: WOLF's capacity utilization determines whether SiC supply can meet data center demand — underutilization is the near-term risk.
  • Hyperscaler PPA announcements: Nuclear/gas PPAs signal committed long-term capex and de-risk data center power supply constraints — bullish for VRT, BE.

Investment Signals

Bullish Triggers

  • VRT backlog expansion or new liquid cooling win
  • Hyperscaler announces nuclear/gas PPA
  • MPWR AI VRM design win at new GPU platform
  • Grid interconnection delays accelerate (BE tailwind)
  • NVIDIA next-gen GPU TDP exceeds 1.5 kW

Bearish Triggers

  • Hyperscaler data center build delays
  • VRT margin compression on competitive liquid cooling
  • WOLF SiC factory ramp issues (yield/cost)
  • AI efficiency gains reduce per-rack power demand
  • Tariffs on cooling/power equipment imports
