Capex Signal: April 25, 2026 — Intel Roars Back, Hyperscalers Reload, and the Packaging Bottleneck Tightens

25 Apr 2026 · 14 min read


This was the week the earnings flood arrived. Intel shocked everyone with a $1.2 billion revenue beat and a 24% single-day stock pop. Lam Research hit record quarterly revenue and guided the June quarter a stunning 9% above consensus. SK Hynix posted a 198% year-over-year revenue surge and told investors HBM supply is sold out for the next three years. Micron's HBM capacity is committed through 2026 under binding contracts. The Philadelphia Semiconductor Index logged its longest winning streak on record.

And that's before we get to the hyperscalers. Amazon, Alphabet, Meta, and Microsoft all report this coming week with a combined $690 billion in committed 2026 capex sitting behind them. The question isn't whether they'll spend. It's whether Wall Street is finally pricing the semiconductor supply chain implications correctly.

The NVIDIA–Marvell NVLink Fusion deal, formalized this week with a $2 billion equity investment, is the most underappreciated structural shift in the AI ecosystem right now. It's not a partnership. It's a toll booth.

Let me walk through everything that mattered this week.

Earnings Scorecard — Week of April 21–25

| Company | Revenue | YoY | vs. Consensus | Key Signal |
|---|---|---|---|---|
| Intel (INTC) — Q1 2026 | $13.6B | +7% | Beat by $1.24B | DCAI +22% YoY; stock +24%; 3rd straight double beat |
| Lam Research (LRCX) — Q1 CY2026 | $5.84B | +24% | Beat; record qtr | June qtr guided $6.6B, +9% vs. consensus |
| SK Hynix — Q1 2026 | $38.2B (USD equiv.) | +198% | In line | HBM sold out 3 years; OP margin 72% |

Next week: Amazon (April 29), Alphabet (April 29), Meta (April 30), Microsoft (April 30). AMD — May 5. NVDA — late May.

Intel Q1 2026: The Turnaround Is Real

I'll be direct: I've been skeptical of the Intel recovery narrative for two years. The foundry execution issues, the margin compression, the leadership turnover — all of it made Intel look like a story that would take half a decade to resolve. Q1 2026 made me update that view materially.

Revenue of $13.6 billion blew past the $12.32B consensus by $1.24 billion — that's not noise. That's three standard deviations above expectations. The stock closed up 24% on Friday, its best single-day move in years, and pulled the entire semiconductor sector higher — AMD up 14%, NVDA up 5%, QCOM up 8%.

| Segment | Q1 Revenue | YoY | Note |
|---|---|---|---|
| Client Computing Group (CCG) | $7.7B | +1% | Stable; AI PC demand building |
| Data Center & AI (DCAI) | $5.1B | +22% | Primary driver; Gaudi and Xeon AI demand |
| Intel Foundry | $5.4B | +16% | Backed by $8.9B federal CHIPS investment |
| Q2 Revenue Guidance | $13.8–$14.8B | — | Above $13.07B est.; third straight double beat; DCAI momentum intact |

The DCAI segment growing 22% YoY to $5.1 billion is the genuinely important number. Intel is capturing real AI infrastructure revenue — not just surviving on client CPU. Government tailwinds (the $8.9 billion CHIPS Act allocation plus national security CPU procurement) are meaningful, but Xeon demand from AI inference deployments is also real. The CPU is becoming the AI inference workhorse for latency-sensitive workloads that don't require GPU-class compute.

Intel Foundry's 16% growth to $5.4B reflects progress on the 18A process node ramp. The 18A tape-outs from Microsoft and others are real. The question is yields. Management's Q2 guidance of $13.8–$14.8B — above analyst expectations of $13.07B — suggests execution is tracking better than the bears had feared.

My read on Intel

Three straight quarters of double beats isn't luck; it's execution. The 24% stock move is large but not irrational given the magnitude of the beat. What I'm watching next: 18A yield data and whether Gaudi 3 is getting real hyperscaler design wins versus NVIDIA Blackwell. Intel still needs a credible GPU inference story to be a full AI data center play. The DCAI segment is promising. It's not decisive yet. But Q1 2026 is the first print in two years that makes me genuinely update my view on INTC.

Lam Research Q1 CY2026: Eight Straight, and Accelerating

LRCX reported first this week. Revenue of $5.84 billion came in 24% above the year-ago quarter and beat consensus. Eight consecutive quarters of year-over-year growth. The June quarter guidance of $6.6 billion at the midpoint was 9.4% above analyst expectations — a massive guidance beat for an equipment company. When LRCX guides that far above consensus, it means tool delivery schedules are locked and customer deposits are already in hand.

LRCX's primary exposure is etch and deposition — the tools that pattern and deposit the film stacks inside every chip. These are critical for leading-edge logic (every N3/N2 AI accelerator requires dozens of etch/dep steps) and for HBM (deep-trench capacitor formation is LRCX-intensive). The WFE cycle is in full acceleration.

Context: typical WFE upcycles last 8–10 quarters. LRCX is now at 8 straight. But this isn't a typical cycle. HBM structural demand, the advanced logic node ramp at TSMC, and the U.S. domestic fab build-out (Intel, TSMC Arizona, Samsung Texas) extend the runway well beyond historical norms. DRAM WFE is projected at $34.9 billion this year (+18% YoY). Every dollar of that flows through etch and deposition tools.

My read on LRCX

The June guidance beat is the most important number in this print. 9% above what the Street had means customer pull-in is happening and delivery slots are filled. The bears will say 8 quarters is long for an upcycle. I'm with the bulls here — the HBM WFE step-up is structural, not cyclical. LRCX is the purest WFE beta in public markets and the guidance is telling you the cycle has legs into at least 2027.

Memory: The HBM Shortage That Won't End Until 2029

SK Hynix reported Q1 2026 results with numbers that should silence any remaining HBM skeptics. Revenue surpassed 50 trillion won for the first time — approximately $38.2 billion. Operating profit hit 37.6 trillion won. Operating margin reached 72% — arguably the most profitable quarter any memory company has ever recorded. Revenue was up 198% year-over-year.

The supply picture is the critical thing to internalize: SK Hynix's CEO said HBM supply is sold out for three years. Not through 2026. Three years from now. HBM prices rose approximately 50% year-over-year into Q1. SK Hynix is already qualifying HBM4E samples with mass production targeted for 2027.

| Memory Company | Key Metric | HBM Supply Status |
|---|---|---|
| SK Hynix | Q1 OP margin: 72%; revenue +198% YoY | Sold out through 2029 |
| Micron (MU) | Q1 rev $13.6B; gross margin 56.8% | All CY2026 HBM under binding contracts |
| Industry (HBM) | HBM = 23% of DRAM wafers in 2026 | +70% YoY demand growth; structural shortage |

HBM now consumes roughly three times the wafer capacity per gigabyte compared to DDR5. As AI demand steepens, HBM is eating into the total available DRAM wafer pool — creating a secondary shortage in conventional DRAM for consumer and enterprise. This is a structural supply inelasticity story, not a demand spike that self-corrects.
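To make the wafer-pool squeeze concrete, here is a back-of-the-envelope sketch of how a ~3x wafer-per-gigabyte intensity turns HBM's share of wafer starts into a much smaller share of total bits. The function and its framing are my own illustrative construction; the only inputs taken from the text are the ~3x intensity and the 23% wafer share.

```python
# Back-of-the-envelope sketch: HBM's ~3x wafer-per-gigabyte intensity means a
# wafer diverted to HBM yields roughly one-third the bits of a DDR5 wafer.
# Inputs are illustrative except the ~3x intensity and 23% wafer share cited above.

def conventional_dram_bit_share(hbm_wafer_share: float, hbm_intensity: float) -> float:
    """Fraction of total DRAM bits that comes from non-HBM wafers.

    hbm_wafer_share: fraction of DRAM wafer starts allocated to HBM (e.g. 0.23)
    hbm_intensity:   wafers per gigabyte relative to DDR5 (e.g. 3.0)
    """
    hbm_bits = hbm_wafer_share / hbm_intensity   # 3x wafers/GB -> ~1/3 the bits per wafer
    ddr_bits = 1.0 - hbm_wafer_share             # remaining wafers at baseline density
    return ddr_bits / (hbm_bits + ddr_bits)

share = conventional_dram_bit_share(0.23, 3.0)
print(f"HBM at 23% of wafer starts supplies only {1 - share:.1%} of total DRAM bits")
```

The asymmetry is the point: nearly a quarter of wafer starts produces less than a tenth of industry bit output, which is exactly why conventional DRAM tightens as HBM ramps.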

Micron is the U.S.-listed HBM pure play. Their HBM3E is shipping into NVIDIA Blackwell. Fiscal Q1 revenue of $13.6 billion, 56.8% gross margin, and $3.9 billion in free cash flow. Those are foundry-level margins on a memory company. That's what constrained supply and AI-driven demand do to industry economics.

NVIDIA–Marvell NVLink Fusion: Not a Partnership — A Toll Booth

The $2 billion NVIDIA investment in Marvell was announced in late March and formalized this week. Most coverage framed it as a “strategic partnership.” That framing undersells what actually happened.

Here's the structure: Marvell's fastest-growing business is designing custom AI accelerators (XPUs) for hyperscalers — AWS Trainium, Microsoft's AI silicon, Google TPU — that are explicitly built to reduce hyperscaler dependency on NVIDIA GPUs. And NVIDIA just invested $2 billion into Marvell and wrapped NVLink Fusion around the whole ecosystem.

NVLink Fusion allows custom XPUs — including Marvell's — to connect directly into NVIDIA's networking stack: ConnectX NICs, Bluefield DPUs, Vera CPUs, Spectrum-X switches. The strategic logic: even as hyperscalers build their own custom chips to avoid NVIDIA GPUs, NVIDIA's networking fabric becomes the connective tissue binding those chips together. You can escape the H100. You cannot escape NVLink Fusion if you want coherent multi-chip connectivity at scale.

Marvell's XPU business generated $1.5 billion in fiscal 2026 and is expected to double by FY2028. The Google co-development announcement — a Memory Processing Unit (MPU) and a next-gen TPU inference chip, reported April 19 — adds another confirmed XPU program. Marvell stock is up more than 50% in April alone. Stifel raised its price target to $140. The custom silicon thesis is not speculative anymore — it's in the order book.

My read on NVDA–MRVL NVLink Fusion

This is the Intel Inside playbook applied to AI datacenter interconnect. The GPU is fungible — or at least, the XPU story makes it less critical. But the fabric is NVIDIA's. Every hyperscaler who deploys a custom XPU and connects it via NVLink Fusion pays NVIDIA a connectivity toll. The $2 billion investment isn't charity — it's ecosystem lock-in at the infrastructure layer. This is one of the most important strategic moves in the AI hardware industry this year, and most of the coverage missed it.

The $690 Billion Question: Hyperscalers Report Next Week

Four of the biggest AI infrastructure spenders in history report earnings in the next five trading days. The aggregate 2026 capex commitment across Alphabet, Amazon, Meta, and Microsoft is approaching $690 billion. This is unprecedented capital concentration in a single technology investment cycle.

| Company | Reports | 2026 Capex Guide | Key Watch Item |
|---|---|---|---|
| Alphabet (GOOGL) | April 29 | $175–185B | Cloud >50% growth; Gemini monetization; FCF compression commentary |
| Amazon (AMZN) | April 29 | $200B | AWS revenue vs. $36.8B consensus; Trainium run rate ($20B+) |
| Meta (META) | April 30 | $115–135B | Ad revenue absorbing capex; MTIA XPU ramp; Llama 4 traction |
| Microsoft (MSFT) | April 30 | $120B+ | Azure growth rate; Copilot monetization; ROI timeline |

The critical risk heading into these reports: free cash flow compression. Amazon is expected to go negative on FCF in 2026. Meta's FCF is projected to drop nearly 90% year-over-year. Microsoft is down 17% year-to-date partly on capex concerns. The question Wall Street is asking isn't whether AI demand is real — TSMC, SK Hynix, and Lam Research just confirmed it emphatically. The question is whether the ROI timeline is short enough to justify near-term FCF destruction.

I expect all four to reiterate or raise 2026 capex guidance. Amazon CEO Andy Jassy said existing commitments cover “a substantial portion” of Amazon's $200B spend and that most of it monetizes in 2027–2028. AWS AI revenue run rate is already above $15 billion, with Amazon's custom chip business (Trainium + Graviton + Nitro) at an annualized $20 billion run rate growing at triple-digit rates. Alphabet's Google Cloud is expected to exceed 50% YoY growth. These aren't speculative capex numbers — they're already generating real revenue.

The capex commentary will move these stocks more than the revenue line. Watch specifically: phasing of datacenter deployments, AI inference monetization timelines, and whether any of the four blinks on spending given macro uncertainty. I don't expect any blinks. The competitive pressure not to fall behind in AI infrastructure is too great.

Advanced Packaging: Still the Tightest Chokepoint

The packaging constraint didn't loosen this week. If anything the demand picture got tighter. TSMC's Q1 revenue of $35.9 billion with 74% of wafers from advanced nodes means the CoWoS attach rate keeps climbing. Every advanced node AI accelerator needs CoWoS packaging, and there is a hard ceiling on how fast TSMC can expand CoWoS capacity.

Current trajectory: TSMC targeting 130,000 CoWoS wafers/month by late 2026, up from 35,000 at end-2024. NVIDIA is booking over 50% of that allocation through 2027. Overflow goes to OSATs — Amkor (AMKR) is doubling capex to $2.5–3 billion in 2026, with Arizona absorbing most of it. ASE is similarly scaling.
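As a sanity check on those capacity figures, here is a small illustrative calculation of the compound growth rate the ramp implies. The ~24-month window and the flat 50% NVIDIA allocation are my assumptions derived from the end-2024 and late-2026 figures above, not TSMC or NVIDIA disclosures.

```python
# Illustrative arithmetic on the CoWoS ramp cited above: ~35k wafers/month at
# end-2024 growing to a ~130k/month target by late 2026. The ~24-month window
# and the flat 50% NVIDIA share are assumptions for this sketch.

def implied_monthly_growth(start_capacity: float, end_capacity: float, months: int) -> float:
    """Constant month-over-month growth rate implied by a capacity ramp."""
    return (end_capacity / start_capacity) ** (1 / months) - 1

g = implied_monthly_growth(35_000, 130_000, 24)
print(f"Implied compound capacity growth: {g:.1%} per month")

# If NVIDIA books >50% of allocation, the rest of the industry splits at most
# half of the exit-rate capacity even once the full target is reached.
non_nvidia_ceiling = 130_000 * 0.5
print(f"Non-NVIDIA ceiling at target capacity: {non_nvidia_ceiling:,.0f} wafers/month")
```

Sustaining roughly 5–6% compound monthly growth in advanced packaging for two straight years is the sense in which this ramp is aggressive even before asking whether it is sufficient.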

ASML is making a push into the packaging space as well — targeting TSMC's CoWoS and SoIC packaging operations with metrology and inspection tools for yield optimization. As packaging becomes the primary value-add layer in the AI silicon stack, the equipment maker who controls yield monitoring on the packaging line has durable leverage. This is a smart adjacent expansion for ASML.

The bottleneck is structural through 2028. Advanced packaging requires specialized equipment, process development, and yield ramps that can't be accelerated with capital alone. AMKR and ASX have multi-year demand visibility at above-normal utilization. The packaging constraint is the single biggest near-term ceiling on AI accelerator production volumes.

Export Controls: The Bifurcation Deepens

The geopolitical overlay remains the most consequential risk factor that the equity market continues to partially discount. The January 2026 BIS rules put H200-equivalent and MI325X-equivalent chips on case-by-case review for China exports. A bipartisan bill in Congress would extend restrictions to DUV tools — the machines China can still legally buy from ASML.

ASML's China revenue dropped from 36% to 19% of sales in a single quarter — the fastest regional mix compression I've seen in their history. ASML management guided full-year China at ~20% of demand, down from 45% in 2023. The non-China AI capex wave is absorbing the shortfall for now. The DUV bill in Congress is the watch item — if it passes committee, that 19% compresses further.

The tariff story is nuanced. The January Section 232 tariffs (25% on certain advanced chips) were structured to exempt chips manufactured domestically to support U.S. supply chain buildout. This is a meaningful tailwind for TSMC Arizona and Intel Foundry — customers sourcing from U.S. fabs get tariff exemption, making domestic fab capacity economics incrementally more attractive.

EDA enforcement is tightening separately. Cadence paid a $95 million BIS penalty for export control violations. SNPS and CDNS need to watch their China exposure carefully — EDA tools are increasingly treated as dual-use technology. The enforcement risk is rising faster than either company has publicly acknowledged.

The Framework: Constraint → Intensity → Durability

Constraint: CoWoS and HBM Are the Binding Variables — Still

Every week this looks more structural, not cyclical. CoWoS capacity is the ceiling on AI accelerator production. HBM is the ceiling on memory bandwidth. Both are sold out for multiple years under binding contracts. You cannot build more AI compute than these two constraints allow. The physics of supply expansion doesn't change because the demand is urgent. TSMC can expand CoWoS at the pace they're targeting; it's still not enough. HBM fab capacity takes 18–24 months to build; the capacity for 2027 is being decided in design rooms right now.

Intensity: $690 Billion Is Not a Number — It's a System

When four companies commit $690 billion to a single infrastructure build cycle in a single fiscal year, that's not capex — it's a structural reorganization of global technology investment. Intel's DCAI growing 22% in Q1 is a downstream signal: those hyperscaler dollars are flowing into servers, CPUs, networking, and storage, not just NVIDIA GPUs. Every layer of the stack is getting funded.

Durability: Intel Beating Three Consecutive Times Is the Signal I Didn't Expect

When I think about durability, I usually point to TSMC's 5-year CAGR upgrade or ASML's backlog. But Intel beating Wall Street by $1.24 billion in Q1 2026 — with DCAI growing 22% — tells you something important: the AI infrastructure wave is broad enough to lift even a company most had written off. When the tide floats boats that were stranded, you're not at the peak of the cycle.

What to Watch Next Week

Alphabet & Amazon — April 29: Both report the same day. Alphabet: watch Google Cloud growth rate and whether it exceeds 50% YoY. If Amazon guides AWS Q2 revenue above $38 billion, that's a strong signal for the entire data center supply chain. Jassy's capex commentary will be the most-quoted line in the semiconductor world next week.

Meta & Microsoft — April 30: Meta is the bellwether on whether AI infrastructure spend is generating near-term ad revenue ROI. Microsoft needs to show Azure growth and Copilot monetization are accelerating enough to justify $120B+ in annual capex. Both results will calibrate whether the FCF compression story is already priced in.

AMD — May 5: The print I'm most uncertain about. Data Center GPU revenue grew 39% YoY to $5.38B in Q4. MI350 ramp timing and hyperscaler qualification status for MI-series will determine whether AMD is a real AI GPU competitor or a distant second. AMD was up 14% in sympathy with Intel this week — the earnings need to justify that move independently.

AMAT & KLAC — May: Applied Materials reported soft Q1 ($7.01B, -2% YoY) back in February on tool delivery timing. Q2 and H2 are when the WFE ramp shows up. KLAC is the process control bellwether as N3/N2 yields mature. Both should confirm what LRCX just signaled.

NVDA — Late May: The most anticipated print of the season. With TSMC Q1 confirming the wafer order trajectory and HBM sold out through 2026, the setup is unambiguously bullish. The $300 price target case now hinges on Vera Rubin visibility and B300 volume guidance for H2. Watch the Q1 FY2027 guide, not just the beat.

DUV export control bill — ongoing: If the bill advances through committee, ASML takes another leg down and the China fab build-out thesis gets further impaired. Watch committee markup timelines closely.

Bottom Line

Intel beat by $1.24 billion and the stock went up 24%. Lam Research guided the June quarter 9% above what Wall Street expected. SK Hynix posted 198% revenue growth with 72% operating margins and told you HBM supply is gone for three years. The Philadelphia Semiconductor Index logged its longest winning streak on record. NVIDIA invested $2 billion in Marvell to lock in NVLink Fusion as the industry's connectivity standard — turning a would-be competitor ecosystem into a toll road.

The supply chain is not cracking. It's not even wobbling. What's happening is that the $690 billion hyperscaler capex wave is so large it's creating structural supply constraints faster than the industry can build capacity — and that means pricing power for everyone from HBM memory to CoWoS packaging to etch tools to networking ASICs.

The risk isn't that demand disappears. The risk is that FCF compression at the hyperscaler level, combined with export control tightening and tariff uncertainty, creates a policy-induced speed bump that temporarily disrupts the linearity of the ramp. I don't think that changes the destination. It changes the smoothness of the path.

Stay long the constraint plays. CoWoS. HBM. Interconnect. Etch tools. The packaging bottleneck doesn't resolve until 2028. Neither does the opportunity.

— J

This research note is provided for informational purposes only and does not constitute investment advice, a solicitation, or an offer to buy or sell any security. The information contained herein is based on sources believed to be reliable but is not guaranteed as to accuracy or completeness. Past performance is not indicative of future results. AI Capex and its contributors may hold positions in securities discussed. Readers should conduct their own due diligence and consult a qualified financial adviser before making any investment decisions.