AI Capex
Reference Library

AI Capex Glossary

59 terms across semiconductor manufacturing, equipment, chips, memory, networking, software, and investing. Engineer-grade definitions — no dumbing down, no marketing language.

59 definitions · 10 categories · 19 letters
Business · Chips · Equipment · Financials · Investing · Manufacturing · Market · Memory · Networking · Software


A (5 terms)

ALD (Atomic Layer Deposition)

Equipment

A deposition technique that adds material one atomic layer at a time by alternating two self-limiting chemical reactions. Each cycle adds ~0.1nm. Essential for ultra-thin conformal films (HfO₂ gate dielectric, RuO₂ local interconnect) at advanced nodes where CVD cannot achieve required uniformity. Lam Research and AMAT dominate ALD tools.
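The cycle count for a given film follows directly from the ~0.1nm/cycle figure above. A minimal sketch (the growth rate and target thickness are illustrative round numbers, not a process recipe):

```python
# Back-of-envelope: ALD cycles needed for a target film thickness,
# using the ~0.1 nm/cycle self-limiting growth rate quoted above.
GROWTH_PER_CYCLE_NM = 0.1

def ald_cycles(target_thickness_nm: float) -> int:
    """Number of self-limiting ALD cycles to reach a target thickness."""
    return round(target_thickness_nm / GROWTH_PER_CYCLE_NM)

# A ~2 nm gate dielectric needs on the order of 20 cycles.
print(ald_cycles(2.0))  # 20
```

This is why ALD throughput is slow relative to CVD: thickness is bought one cycle at a time, in exchange for atomic-scale uniformity.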

ALE (Atomic Layer Etch)

Equipment

The etch counterpart to ALD — removes material one atomic layer per self-limiting cycle. Critical for gate-all-around (GAA) nanosheet transistors at N2 and below, where angstrom-level sidewall control is required. Lam Research leads in ALE for logic and memory.

ASIC (Application-Specific Integrated Circuit)

Chips

A chip designed for a single purpose. Google's TPUs, Meta's MTIA, and Amazon's Trainium/Inferentia are AI ASICs. Lower power and cost per inference than general-purpose GPUs, but a leading-edge AI ASIC program requires $500M–$2B+ in total development cost (design teams, NRE, software) and multi-year design cycles. TSMC or Samsung manufacture them for fabless AI companies.

ASP (Average Selling Price)

Financials

Average revenue per unit shipped. Rising ASP on DRAM or HBM = supply-constrained pricing power. NVIDIA's data center GPU ASP reached ~$35,000 for H100 SXM5. HBM3e sells at $15–20/GB vs. ~$3–5/GB for standard DRAM.

Advanced Packaging

Manufacturing

Techniques for integrating multiple chiplets into a single package at high interconnect density. Includes 2.5D (CoWoS, EMIB), 3D stacking (SoIC, HBM), and direct bond interconnect (DBI). Critical for AI GPUs — NVIDIA B200 uses CoWoS-L to integrate GPU die with 8 HBM3e stacks.

B (3 terms)

B200 (Blackwell)

Chips

NVIDIA's Blackwell-generation AI GPU, announced 2024 and shipping in volume through 2025. 208B transistors on TSMC 4NP, 192GB HBM3e at 8 TB/s bandwidth, 9 PFLOPS FP8 (sparse), 1000W TDP. Packaged with CoWoS-L. The GB200 NVL72 rack combines 72 B200 GPUs with 36 Grace CPUs and draws ~120kW. Supply constrained by CoWoS-L capacity since launch.

BSPDN (Backside Power Delivery Network)

Manufacturing

Routing VDD/VSS to transistors from the wafer backside rather than through front-side metal. Reduces IR drop and frees front-side wiring for signal routing, enabling ~10–15% performance improvement. TSMC introduces BSPDN (branded Super Power Rail) at A16. Requires new selective deposition tools (AMAT advantage).

Book-to-Bill (B2B)

Financials

Orders received ÷ orders shipped for semiconductor equipment. >1.0 signals expanding demand. SEMI publishes monthly North American billings data (the formal book-to-bill report was discontinued in 2017, but the ratio remains a standard industry metric). A reliable leading indicator for WFE revenue 2–3 quarters forward.
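The ratio itself is trivial arithmetic; a sketch with hypothetical bookings and billings figures (in $M):

```python
def book_to_bill(bookings: float, billings: float) -> float:
    """B2B ratio: orders received divided by orders shipped (billed)."""
    return bookings / billings

# Hypothetical month: $1,200M booked against $1,000M billed.
# >1.0 means orders are outrunning shipments (expanding demand).
print(book_to_bill(1_200, 1_000))  # 1.2
```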

C (6 terms)

Capex (Capital Expenditure)

Financials

Spending on physical assets with multi-year useful life — buildings, servers, chips, networking equipment. Hyperscalers' combined AI capex is projected to exceed $300B in 2026, one of the largest industrial investment programs in history.

CD-SEM (Critical Dimension Scanning Electron Microscope)

Equipment

Measures circuit feature widths at nanometer scale on production wafers. Essential for verifying lithography accuracy. Hitachi High-Tech dominates the CD-SEM market; KLA leads the broader inspection and metrology space. At N2, feature widths of 5–8nm must be controlled to ±0.5nm tolerance.

CMP (Chemical Mechanical Planarization)

Equipment

Polishes wafer surfaces using chemical slurry and mechanical abrasion to achieve nanometer-level flatness between each metal interconnect layer. A chip at N3 has 16 metal layers requiring 30+ CMP steps. AMAT leads CMP tools; Entegris supplies slurries.

CoWoS (Chip-on-Wafer-on-Substrate)

Manufacturing

TSMC's 2.5D advanced packaging integrating GPU dies and HBM stacks on a silicon interposer. CoWoS-S uses a standard interposer. CoWoS-L uses a larger reticle-stitched interposer for 8+ HBM stacks. The primary bottleneck for AI GPU supply — TSMC tripling CoWoS capacity through 2026.

CUDA

Software

NVIDIA's parallel computing platform, released 2006. The primary reason NVIDIA dominates AI training. A nearly two-decade ecosystem of libraries (cuBLAS, cuDNN, NCCL) and tools creates switching costs hardware specs alone cannot overcome. Developer count exceeds 4 million.

CVD (Chemical Vapor Deposition)

Equipment

Deposits thin films from gaseous precursors reacting on the wafer surface. Used for dielectrics (SiO₂, Si₃N₄), metals (W, TiN), and semiconductors. AMAT and Lam Research dominate CVD tools. PECVD and SACVD are common variants.

D (4 terms)

Die

Manufacturing

A single chip cut from a wafer after fabrication. Die area, yield rate, and wafer cost determine manufacturing economics. Larger die = more transistors but lower yield — the fundamental tension in chip design economics.
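The tension between die size, yield, and cost can be sketched with the classic die-per-wafer approximation. All dollar figures and die sizes below are hypothetical, and scribe-line losses are ignored:

```python
import math

def gross_die_per_wafer(wafer_diam_mm: float, die_area_mm2: float) -> int:
    """Classic die-per-wafer approximation: usable wafer area divided by
    die area, minus an edge-loss correction term."""
    r = wafer_diam_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diam_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, dies: int, yield_frac: float) -> float:
    """Effective manufacturing cost per die that passes test."""
    return wafer_cost / (dies * yield_frac)

# Hypothetical: an ~800 mm2 AI GPU die on a 300 mm wafer.
dies = gross_die_per_wafer(300, 800)
print(dies)  # 64 gross die candidates
# Hypothetical $20,000 wafer at 70% yield:
print(round(cost_per_good_die(20_000, dies, 0.7), 2))
```

A bigger die shrinks the candidate count and typically lowers yield at the same time, compounding the cost penalty — exactly the tension the definition describes.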

DPT (Double Patterning)

Manufacturing

Lithography technique using two exposures to print features smaller than a single EUV or DUV exposure resolves. Required for dense layers at 7nm and below. EUV single-patterning eliminates DPT overhead — a key reason EUV adoption is economically driven.

DRAM (Dynamic Random-Access Memory)

Memory

Volatile memory requiring constant electrical refresh to retain data. Primary system memory in servers and PCs. An AI training server carries 1–2TB of DRAM alongside its GPUs. Samsung, SK Hynix, and Micron hold the oligopoly. DRAM cycles are supply-driven by these three players' capacity additions.

DUV (Deep Ultraviolet)

Equipment

Lithography using 193nm (ArF) or 248nm (KrF) wavelength light. Requires multi-patterning for sub-10nm features. ASML dominates DUV market (>80% share) and still ships more DUV than EUV in unit terms — mature node fabs use DUV exclusively.

E (3 terms)

EDA (Electronic Design Automation)

Software

Software tools for chip design and verification. The flow runs from RTL (Verilog/SystemVerilog) through synthesis, place-and-route, signoff, and tapeout. Synopsys and Cadence hold an effective duopoly. Without EDA, no chip can be designed at modern transistor counts.

Etch

Equipment

Removes specific materials from wafer surfaces with extreme directional and chemical selectivity. Plasma etch uses reactive ions; wet etch uses liquid chemicals. Lam Research and Tokyo Electron lead. GAA at N2 requires new ALE chemistries for nanosheet release.

EUV (Extreme Ultraviolet Lithography)

Equipment

Lithography using 13.5nm wavelength light generated by a tin plasma struck with a CO₂ laser at 50,000 shots/second. ASML has a complete monopoly. Required for volume production at 7nm and below. Low-NA: €200M/unit; High-NA: €350M+/unit. ~60 Low-NA systems ship per year globally.

F (4 terms)

Fabless

Business

Semiconductor company that designs chips without owning manufacturing facilities. NVIDIA, AMD, Qualcomm, Apple, Broadcom, Marvell are fabless. They outsource manufacturing to foundries. The fabless model enables R&D focus without the $20B+ cost of a leading-edge fab.

FinFET

Manufacturing

Fin Field-Effect Transistor — dominant transistor architecture from 22nm to 3nm. The gate wraps around 3 sides of a vertical fin for better electrostatic control vs. planar transistors. Being replaced by Gate-All-Around (GAA) at TSMC's N2 and Samsung's 3nm.

Foundry

Business

Contract semiconductor manufacturer. TSMC makes ~90% of the world's advanced logic chips for NVIDIA, Apple, AMD, Qualcomm, and 500+ others. The foundry model (Morris Chang, 1987) separated chip design from manufacturing and enabled the fabless industry.

FSDP (Fully Sharded Data Parallelism)

Software

Distributed training technique that shards model parameters, gradients, and optimizer states across all GPUs, reducing memory per GPU while maintaining training throughput. Enables training models larger than a single GPU's memory. Core to PyTorch Distributed at large scale.
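The memory effect can be sketched with back-of-envelope arithmetic rather than actual PyTorch code. The 16 bytes/parameter figure is a common mixed-precision assumption (bf16 weights and gradients plus fp32 master weights and Adam moments); activations are excluded:

```python
def fsdp_param_memory_gb(n_params_b: float, n_gpus: int,
                         bytes_per_param: int = 16) -> float:
    """Per-GPU memory (GB) for params + grads + optimizer states when
    fully sharded across n_gpus.

    bytes_per_param=16 assumes bf16 weights (2) + bf16 grads (2) +
    fp32 master weights and Adam moments (12). Activations excluded.
    """
    return n_params_b * bytes_per_param / n_gpus

# A 70B-parameter model: 70 * 16 = 1120 GB of state in total.
print(fsdp_param_memory_gb(70, 8))   # 140.0 GB/GPU -- exceeds any single GPU
print(fsdp_param_memory_gb(70, 64))  # 17.5 GB/GPU  -- fits comfortably
```

This is the core trade: sharding makes per-GPU state shrink linearly with GPU count, paid for with collective communication (all-gathers) during forward and backward passes.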

G (2 terms)

GAA (Gate-All-Around)

Manufacturing

Transistor architecture where the gate electrode surrounds the channel on all 4 sides (vs. FinFET's 3). Provides superior electrostatic control, reducing leakage and enabling continued scaling. Samsung uses MBCFET (GAA) at 3nm. TSMC introduces nanosheet GAA at N2 (2025). Requires new ALD and ALE processes.

GPU (Graphics Processing Unit)

Chips

A processor with thousands of smaller cores optimized for parallel computation. Originally for graphics; now dominant for AI training. NVIDIA holds ~80–90% of AI/HPC GPU market share. Key advantage: massive parallelism (10,000+ cores) and HBM bandwidth for matrix operations at scale.

H (5 terms)

H100 (Hopper)

Chips

NVIDIA's flagship AI GPU of 2022–24. 80B transistors on TSMC 4N, 80GB HBM3 at 3.35TB/s, 3.9 PFLOPS FP8 (sparse), 700W TDP (SXM5). The chip that defined the 2023–24 AI buildout; waitlists exceeded 12 months at peak demand. Succeeded by the H200 (same die, 141GB HBM3e), then the full-generation B200.

HBM (High Bandwidth Memory)

Memory

Memory architecture stacking multiple DRAM dies vertically using Through-Silicon Vias (TSVs), placed adjacent to the GPU die on a silicon interposer. Provides 5–10× GDDR bandwidth at 30–50% lower power per bit. SK Hynix leads supply. Every AI training GPU requires HBM.

HBM3e

Memory

Current-generation HBM (2024–25). 24GB (8-high) or 36GB (12-high) per stack, ~1.15TB/s per stack bandwidth. H200: 6 stacks = 141GB at 4.8TB/s. B200: 8 stacks = 192GB at 8 TB/s. SK Hynix, Micron, and Samsung all produce HBM3e; SK Hynix maintains the yield and production lead.
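Aggregate GPU memory bandwidth is simply stacks × per-stack bandwidth; a one-line sketch using the ~1.15TB/s per-stack figure (shipping parts clock somewhat below this theoretical peak):

```python
def hbm_aggregate_bw_tbs(stacks: int, per_stack_tbs: float) -> float:
    """Total HBM bandwidth (TB/s) = number of stacks x per-stack bandwidth."""
    return stacks * per_stack_tbs

# 8 stacks x ~1.15 TB/s = 9.2 TB/s theoretical peak for a B200-class package.
print(hbm_aggregate_bw_tbs(8, 1.15))
```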

HBM4

Memory

Next-generation HBM (sampling 2025, production 2026). Up to 16-high DRAM stacks at ~2TB/s per stack over a widened 2,048-bit interface. Designed for NVIDIA's Rubin GPU (Blackwell successor). A customizable logic base die — opening the door to processing-in-memory — is a new HBM4 capability. Requires CoWoS-L packaging.

High-NA EUV

Equipment

Next-generation EUV with 0.55 numerical aperture (vs. 0.33 for Low-NA), resolving ~1.7× finer features per single exposure. Targeted at A14-class nodes and below to avoid expensive EUV multi-patterning. ASML ships at €350M+/unit. Intel Foundry received the first system in 2024; TSMC has signaled a later, more cautious insertion.

I (3 terms)

InfiniBand

Networking

High-performance interconnect for GPU-to-GPU communication across servers. NDR InfiniBand: 400Gb/s per port, <1μs latency. NVIDIA acquired Mellanox (2020) and dominates the market. Required for large-scale LLM training where thousands of GPUs synchronize gradients every few hundred milliseconds.

Interposer

Manufacturing

A layer of silicon between chiplets and the package substrate, providing dense microbump interconnect at 10–55μm pitch. TSMC's CoWoS uses a silicon interposer to connect GPU dies and HBM stacks; bandwidth comes from very wide interfaces (~1,024 data pins per HBM stack) running at several Gbps per pin rather than from fast individual lanes.

Ion Implantation

Equipment

Implants dopant atoms (Boron, Phosphorus, Arsenic) into silicon at precise energies and doses to create transistor junctions and tune threshold voltage. Sub-1keV implants for FinFET/GAA extensions require nanometer-scale junction depth control. AMAT and Axcelis are the primary vendors.

K (1 term)

KLAC (KLA Corporation)

Equipment

Dominant supplier of process control and metrology equipment: defect inspection (Puma, eSL10), overlay metrology (Archer series), film measurement (ASET-F). ~50% market share in wafer inspection. Every advanced fab is a KLA customer. Revenue scales with inspection intensity at each new node.

L (3 terms)

LLM (Large Language Model)

Software

A neural network trained on vast text corpora to generate text. GPT-4, Claude, Gemini, and LLaMA are LLMs. Training requires 10,000–50,000+ GPUs running for months. Each parameter update requires reading and writing model weights to HBM at full bandwidth. The primary demand driver for AI GPUs.

Lithography

Equipment

Printing circuit patterns onto silicon wafers using light projected through a mask (reticle). The most critical and expensive semiconductor manufacturing step. ASML monopolizes EUV; they also produce DUV (193nm immersion) used for mature-node patterning.

LRCX (Lam Research)

Equipment

Second-largest semiconductor equipment vendor. Leads in plasma etch, ALD (ALTUS), and clean. Dominant in DRAM manufacturing — virtually every DRAM chip requires dozens of Lam etch steps. HBM manufacturing is Lam-intensive due to TSV etch requirements. ~30% China revenue exposure.

M (3 terms)

MTr/mm² (Transistor Density)

Manufacturing

Millions of transistors per square millimeter. The real measure of process node advancement. TSMC N5: 171 MTr/mm². N3E: ~200 MTr/mm². N2: ~215 MTr/mm². More density = more compute per watt per dollar — the semiconductor industry's core value proposition.

Memory (DRAM/NAND/HBM)

Memory

Three main memory technologies: DRAM (fast volatile, $3–5/GB), NAND Flash (non-volatile storage, <$0.10/GB), HBM (ultra-high bandwidth AI memory, ~$15–20/GB). Each has different manufacturing, economics, and cycle dynamics. Memory is supply-driven and cyclical; logic is innovation-driven.

Multi-Die / Chiplets

Manufacturing

Designing chips as multiple smaller dies assembled in an advanced package. AMD EPYC, Intel Meteor Lake, and custom AI ASICs use chiplets. Benefits: higher yield per die, mix of process nodes, faster design cycles. CoWoS and SoIC are the primary integration vehicles at leading-edge.

N (4 terms)

N2 (TSMC 2nm)

Manufacturing

TSMC's first Gate-All-Around (nanosheet) process, entering production 2025. ~215 MTr/mm². ~10–15% better performance or ~25–30% lower power at iso-performance vs. N3E. Apple is expected to be the lead volume customer. Next-gen AI accelerators will migrate to N2 in 2025–26.

NAND Flash

Memory

Non-volatile memory storing charge in floating gate or charge-trap cells. Used in SSDs and data center storage. 3D NAND has reached 200+ vertical layers. Not used in AI training compute (too slow for random access), but stores datasets, checkpoints, and model weights at scale.

NRE (Non-Recurring Engineering)

Financials

Fixed up-front design and mask costs for a chip. At N3/N2, NRE runs $50–200M (mask sets alone $15–30M). NRE amortizes over production volume — the reason custom ASICs make economic sense only for very high-volume or very high-value applications.
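The amortization effect can be sketched with hypothetical numbers (the $100M NRE and $2,000 marginal cost below are illustrative, not quotes):

```python
def unit_cost(nre: float, marginal_cost: float, volume: int) -> float:
    """Effective per-chip cost: amortized NRE plus marginal production cost."""
    return nre / volume + marginal_cost

# $100M NRE, $2,000 marginal cost per chip:
print(unit_cost(100e6, 2_000, 100_000))    # 3000.0 -- NRE adds 50% per chip
print(unit_cost(100e6, 2_000, 1_000_000))  # 2100.0 -- NRE nearly vanishes
```

At 100k units the fixed cost dominates the economics; at 1M units it is noise — which is why only hyperscaler-scale volumes (or very high-value chips) justify custom silicon.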

NVLink

Networking

NVIDIA's proprietary GPU-to-GPU interconnect within a server node. NVLink 4.0 (Hopper): 900GB/s bidirectional per GPU. NVSwitch 3.0 provides full all-to-all connectivity across 8 GPUs at 57.6Tb/s aggregate. Essential for model-parallel training of LLMs where tensors must be split across multiple GPUs.

P (2 terms)

PDK (Process Design Kit)

Manufacturing

Foundry-provided software library describing electrical behavior and physical design rules for each process layer. Chip designers use the PDK to verify designs before manufacturing. The interface between chip design tools and fab manufacturing. Foundries keep PDKs strictly confidential.

Process Control / Metrology

Equipment

Inspection and measurement equipment monitoring wafer quality after every critical step. At N2, a fab runs KLA scans after lithography, etch, CMP, and deposition — potentially 100+ inspection steps per wafer. Without process control, yield collapses. KLA dominates (~50% market share).

R (1 term)

Reticle (Photomask)

Equipment

Quartz plate with chrome patterns defining one chip layout layer. Projected onto wafers by the lithography scanner. Standard reticle: ~33×26mm (858mm² exposure field). CoWoS-L uses reticle stitching for interposers larger than one reticle field. N2 mask sets cost $15–30M and take 8–12 weeks to make.

S (3 terms)

SoIC (System on Integrated Chips)

Manufacturing

TSMC's 3D stacking technology bonding two logic dies face-to-face at <10μm bump pitch. Provides die-to-die bandwidth orders of magnitude higher than CoWoS interposer connections. Used where two logic blocks need massive bandwidth between them — next-generation chiplet integration.

Sovereign AI

Market

Government-owned or government-funded AI compute infrastructure. UAE, Saudi Arabia, Japan, France, India, and other governments have committed to national GPU clusters. Sovereign AI is a new NVDA demand category outside the hyperscaler model — subject to far less ROI scrutiny, representing an estimated $30–50B+ of incremental demand.

Supercycle

Market

An extended period of above-trend semiconductor capex driven by a structural demand shift rather than a normal inventory cycle. The AI supercycle (2023–present) is driven by hyperscaler AI infrastructure buildout — multi-year commitments, relatively inelastic to short-term revenue fluctuations.

T (3 terms)

Tapeout

Manufacturing

Final chip design step: sending the complete physical layout (GDS-II file) to the foundry for mask generation. After tapeout, first silicon arrives 2–3 months later. Any bug found post-tapeout requires a full re-spin at $15–30M mask cost — motivating extreme pre-tapeout verification investment.

TSV (Through-Silicon Via)

Manufacturing

Vertical electrical connections drilled through a silicon die, enabling die stacking. HBM uses thousands of TSVs (1–5μm diameter) to connect DRAM layers. TSV density determines bandwidth per stack. Lam Research is the dominant supplier of DRIE (deep reactive ion etch) tools used to form TSVs.

TSMC (Taiwan Semiconductor Manufacturing)

Business

World's largest contract chip manufacturer, founded by Morris Chang in 1987. Makes ~90% of the world's advanced logic chips for NVIDIA, Apple, AMD, Qualcomm, and 500+ others. Also operates CoWoS advanced packaging lines, making TSMC the critical node for both chip production and AI GPU assembly.

W (3 terms)

WFE (Wafer Fabrication Equipment)

Financials

Total annual global spend on semiconductor manufacturing equipment. ~$100B+/year industry. Dominated by ASML (~25%), AMAT (~20%), Lam (~15%), TEL (~15%), KLA (~10%). WFE spend is the most accurate leading indicator for chip supply 12–18 months forward.

Wafer

Manufacturing

Thin circular silicon slice on which chips are fabricated. Standard sizes: 200mm (mature nodes) and 300mm (advanced logic and memory). 300mm provides 2.25× the area of 200mm — a key cost reduction lever when migrating. Site flatness must be controlled to nanometer scale to stay within EUV depth-of-focus.
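The 2.25× figure is just the ratio of wafer areas, which scales with diameter squared; a one-line check:

```python
import math

def wafer_area_mm2(diameter_mm: float) -> float:
    """Usable wafer area, treating the wafer as a perfect circle."""
    return math.pi * (diameter_mm / 2) ** 2

# (300/200)^2 = 2.25: the pi terms cancel in the ratio.
print(round(wafer_area_mm2(300) / wafer_area_mm2(200), 2))  # 2.25
```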

Wheel Strategy

Investing

An options income strategy combining cash-secured puts (collect premium, potentially acquire stock cheaper) with covered calls (collect premium on existing shares, potentially sell higher). Used on AI capex stocks with high implied volatility to generate income while maintaining long exposure.
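One full cycle's premium income can be sketched as follows. Prices are hypothetical, and assignment outcomes, fees, and stock P&L are ignored — this shows only the premium-collection mechanic:

```python
def wheel_cycle_income(put_premium: float, call_premium: float,
                       contracts: int = 1) -> float:
    """Premium collected in one full wheel cycle (one put leg + one call leg).

    Each options contract covers 100 shares; premiums are quoted per share.
    Ignores assignment outcomes, commissions, and stock price moves.
    """
    return (put_premium + call_premium) * 100 * contracts

# Hypothetical: sell a $5.00 put, then a $4.00 covered call, 2 contracts each.
print(wheel_cycle_income(5.0, 4.0, contracts=2))  # 1800.0
```

High implied volatility on AI capex names inflates both premiums, which is why the strategy is paired with these stocks — at the cost of capping upside on the covered-call leg.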

Y (1 term)

Yield

Manufacturing

Percentage of dies on a wafer that pass all electrical tests. New nodes start below 50% and improve over 12–24 months; mature nodes reach 90%+. Because cost per good die scales as 1/yield, a 10% relative yield improvement cuts effective cost per chip by ~9% — the biggest lever in semiconductor manufacturing economics.
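The yield-to-cost relationship follows from modeling cost per good die as wafer cost divided by good-die count (a standard simplification; the helper name is illustrative):

```python
def cost_reduction_from_yield_gain(yield_gain: float) -> float:
    """Relative reduction in cost per good die from a relative yield gain.

    Cost per good die scales as 1/yield, so a gain g multiplies cost
    by 1/(1+g): a 10% yield gain cuts cost by 1 - 1/1.10 ~= 9.1%.
    """
    return 1 - 1 / (1 + yield_gain)

print(round(cost_reduction_from_yield_gain(0.10), 3))  # 0.091
print(round(cost_reduction_from_yield_gain(0.50), 3))  # 0.333
```

Note the asymmetry: yield gains early in a node's ramp (when yield is low and gains are large in relative terms) are worth far more than the same absolute improvement at maturity.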