
Optical Networking

800G/1.6T transceivers · coherent optics · silicon photonics · DCI. The bandwidth infrastructure every AI cluster depends on.

Updated April 2026 · 7 min read


What It Is

Optical networking uses photons rather than electrons to transmit data — letting bandwidth scale without proportional increases in power and without the reach limits of copper traces. In AI data centers, optical connections appear at three levels: within a rack (AOCs — active optical cables replacing copper at short distances), between racks and aggregation switches (100m to ~2km, using pluggable optical transceivers), and between data centers (DCI — data center interconnect, using coherent optics over 80–3,000km).

The driving metric is speed per transceiver: the industry moved 100G → 400G → 800G between 2017 and 2024, and 1.6T transceivers are now entering qualification at major hyperscalers. This generational speed doubling, roughly every 2–3 years, is accelerating because AI training clusters require every GPU to exchange gradient data at hundreds of gigabits per second — a 10–100× step up from traditional web-serving workloads.

Why It Matters for AI Capex

Training a frontier AI model requires synchronizing thousands of GPUs. Each GPU-to-GPU gradient exchange during backpropagation demands sustained 400–800 Gb/s per port — more than most backbone WAN links carried just 5 years ago. Google's TPU v5 pods, NVIDIA's GB200 NVL72 clusters, and Meta's AI Research SuperCluster are all bandwidth-constrained by the switch and transceiver layer between GPU servers.
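Those per-port numbers bound training step time directly. A back-of-envelope sketch, assuming a hypothetical 70B-parameter model, bf16 gradients, 1,024 GPUs, and an idealized ring all-reduce (not figures from any named cluster), shows why link speed is the constraint:

```python
# Time for one ring all-reduce of gradients over the cluster fabric.
# All inputs are illustrative assumptions, not vendor figures.

def allreduce_seconds(params: float, bytes_per_param: float,
                      link_gbps: float, n_gpus: int) -> float:
    grad_bytes = params * bytes_per_param
    link_bytes_per_s = link_gbps * 1e9 / 8
    # A ring all-reduce moves ~2*(N-1)/N of the gradient volume
    # across each GPU's link.
    return 2 * (n_gpus - 1) / n_gpus * grad_bytes / link_bytes_per_s

# 70B-parameter model, 2-byte (bf16) gradients, 1,024 GPUs
t_800g = allreduce_seconds(70e9, 2, 800, 1024)  # ~2.8 s per sync
t_100g = allreduce_seconds(70e9, 2, 100, 1024)  # ~22.4 s per sync
print(f"800G link: {t_800g:.1f} s, 100G link: {t_100g:.1f} s")
```

Under these assumptions, dropping from 800G to 100G ports makes every gradient synchronization 8× slower — which is why clusters are sized around the transceiver layer.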

The hyperscaler capex wave translates almost directly to transceiver unit demand. Meta's $60B 2026 AI capex, Google's $175–185B, and Microsoft's $120B+ all include massive optics and switch deployments. A single 1,000-GPU cluster can consume 2,000–4,000 800G transceivers. The transceiver market is growing at a 30–40% CAGR through 2026 — the fastest rate since the fiber buildout of the late 1990s.
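The 2,000–4,000 transceivers per 1,000 GPUs falls out of simple topology counting: every optical link needs a transceiver at each end, and each fabric tier adds another link per GPU port. A sketch under a hypothetical non-blocking leaf-spine topology with one port per GPU:

```python
# Transceiver count for a GPU cluster, assuming each optical link
# terminates in a transceiver at both ends and a non-blocking fabric
# carries one uplink per GPU port per tier. Topology is hypothetical.

def transceiver_count(n_gpus: int, ports_per_gpu: int = 1,
                      optical_tiers: int = 2) -> int:
    links = n_gpus * ports_per_gpu * optical_tiers
    return links * 2  # two transceivers per link

print(transceiver_count(1000, 1, optical_tiers=1))  # 2000: leaf tier optical only
print(transceiver_count(1000, 1, optical_tiers=2))  # 4000: leaf + spine optical
```

The one-tier and two-tier cases bracket the article's 2,000–4,000 range; clusters with multiple NIC ports per GPU scale the count up proportionally.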

Technology Landscape

| Segment | Speed | Distance | Primary Use |
| --- | --- | --- | --- |
| Datacom pluggable (800G) | 800 Gb/s | 100m–2km | AI cluster intra-DC, spine-leaf switching |
| Datacom pluggable (1.6T) | 1.6 Tb/s | 100m–500m | Next-gen AI clusters, sampling 2025 |
| Coherent (ZR/ZR+) | 400G–800G/λ | 80km–3,000km | Hyperscaler DCI, distributed training across campuses |
| Silicon photonics | 400G–3.2T | On-package → 2km | Co-packaged optics (CPO), next-gen switches |
| Co-packaged optics (CPO) | 12.8–51.2 Tb/s aggregate | On-board | Future 51.2T AI switches eliminating pluggables entirely |
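The CPO aggregate figures map directly onto pluggable socket counts, which is the revenue at stake for module vendors. A quick check, taking the 51.2 Tb/s ASIC capacity from the table and standard per-port speeds:

```python
# Ports per switch ASIC at a given per-port speed.
# ASIC capacity in Tb/s (from the table), port speed in Gb/s.

def ports_per_asic(asic_tbps: float, port_gbps: int) -> int:
    return int(asic_tbps * 1000) // port_gbps

print(ports_per_asic(51.2, 800))   # 64 pluggable 800G cages a CPO design replaces
print(ports_per_asic(51.2, 1600))  # 32 ports at 1.6T
```

Each 51.2T switch that moves to CPO removes up to 64 pluggable module sockets from the market, which is why the CPO timeline appears below as a structural metric for COHR and LITE.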

Supply Chain Players

Coherent Corp. (COHR)

The largest optical component and transceiver vendor, formed by the merger of Finisar and II-VI. COHR makes transceivers at all speeds (400G, 800G, with 1.6T sampling), coherent modules, and the vertical-cavity surface-emitting lasers (VCSELs) used in most short-reach datacenter optics. AI datacom is now COHR's fastest-growing segment, offsetting telecom headwinds.

Lumentum (LITE)

LITE is a pure-play photonics company: lasers, modulators, amplifiers, and reconfigurable optical add-drop multiplexers (ROADMs). Historically telecom-heavy, but its cloud and datacenter business — lasers and components for hyperscaler transceivers — is surging. LITE is a key supplier of 800G laser components to multiple transceiver assemblers.

Ciena (CIEN)

Ciena makes WaveLogic coherent optical line systems and switching platforms. The company is the primary DCI vendor for hyperscalers interconnecting geographically distributed data centers. As AI training moves to multi-campus distributed clusters, Ciena's WaveLogic 6 Extreme (up to 1.6 Tb/s per wavelength) is a critical enabler.

Marvell Technology (MRVL)

MRVL makes the digital signal processors (DSPs) and PAM4 SerDes behind high-speed optical links — Spica for PAM4 datacom modules, Orion for coherent. Most 800G optical transceivers depend on a merchant DSP of this class for the signal integrity AI cluster networks demand. MRVL also makes custom networking ASICs for cloud DCI platforms.

Arista Networks (ANET)

Arista provides the Ethernet switches into which 800G transceivers plug. ANET's 7800R series and its Ultra Ethernet Consortium-aligned platforms are the primary alternative to NVIDIA's InfiniBand fabric for AI clusters. Meta and Google run large-scale Arista deployments — ANET is the swing factor in the InfiniBand vs. Ethernet debate.

Metrics to Watch

  • 800G transceiver shipments (units): The primary revenue driver for COHR and LITE. Growing 3–4× per year through 2026.
  • COHR datacom revenue %: Datacom (AI) vs. telecom mix shift is the primary valuation driver — telecom is headwind, datacom is tailwind.
  • MRVL cloud & carrier segment bookings: MRVL's optical DSP revenue leads transceiver revenue by 1–2 quarters.
  • Coherent DCI bookings at CIEN: Hyperscaler DCI wins are lumpy but indicate distributed AI training architecture adoption.
  • Co-packaged optics (CPO) timeline: CPO would eliminate pluggable transceivers — a structural risk to COHR and LITE long-term. Timeline slipping is bullish for plug vendors.
  • InfiniBand vs. Ethernet share in AI clusters: Rising Ethernet share benefits ANET and MRVL over NVIDIA's InfiniBand monopoly.

Investment Signals

Bullish Triggers

  • Hyperscaler 800G/1.6T transceiver purchase orders
  • CPO timeline delays (extends pluggable revenue)
  • Meta/Google Ultra Ethernet Consortium expansions
  • CIEN DCI contract wins at new hyperscaler campuses
  • COHR datacom revenue overtaking telecom

Bearish Triggers

  • CPO acceleration (structural headwind to pluggables)
  • Hyperscaler capex deferrals (cuts transceiver orders)
  • NVIDIA InfiniBand market share gains over Ethernet
  • COHR/LITE margin compression on 800G pricing
  • Telecom inventory destocking extending
