Ben Pouladian's Blog
AI and Investing Are the Future!
Credo’s guidance was $335–$345 million. Wall Street consensus before that had been $248 million. They beat their own elevated guide by 18%. Sequential growth from Q2’s $268 million: 51%. In one quarter.
Yes, I’m writing about an earnings preannouncement the morning after the Super Bowl. That should tell you how significant this is.
This dropped four days after Amazon announced $200 billion in 2026 capex — the largest single-year corporate capital expenditure in history, predominantly flowing into AWS and AI infrastructure. When the biggest hyperscaler on earth tells you it’s spending 53% more than last year, and your connectivity supplier simultaneously crushes estimates by 18%, you’re not looking at a coincidence. You’re looking at a demand signal.
And it maps directly to the thesis I laid out in my AI Datacenter Optical Interconnect Boom deep dive — with one important addition that makes the story even better than I originally framed it.
The Nuance Everyone Is Missing: Copper Isn’t Dead
The lazy narrative is “copper-to-optical transition.” The real story is more interesting.
Credo’s ZeroFlap AECs are actually displacing optics at short distances — not the other way around. One hyperscaler came to Credo because it was losing 20–30% of its uptime fighting optical link flaps. Credo’s AECs deliver up to 1,000x better reliability than laser-based optical modules at half the power, saving up to $1,000 per GPU compared to traditional optics. AECs are now replacing optical connections at distances up to 7 meters and have become the de facto standard for in-rack and rack-to-rack AI cluster connectivity.
So what’s actually happening is copper is eating optics from below while optics eats copper from above. AECs are pushing the copper frontier outward with superior reliability and lower power. Meanwhile, co-packaged optics and silicon photonics are the only solution for longer distances beyond that range — the physics I detailed using the Qualcomm FOM framework.
Credo wins both sides of this trade. AECs today. Active Light Cables tomorrow. That’s the real thesis.
The AEC Ramp We Called
In December, I detailed how Credo’s HiWire AECs — where they hold 88% market share — are the current revenue engine. I cited JPMorgan’s $4 billion AEC TAM by 2028. At this trajectory, those numbers look conservative.
The math is multiplicative. GPU-to-GPU connections don’t scale linearly with cluster size. A pod-level configuration needs a handful of interconnects. Row-level needs orders of magnitude more. When Amazon alone is deploying $200 billion worth of infrastructure — with AWS growing 24%, its fastest in 13 quarters — and Alphabet is layering on another $175–$185 billion, the interconnect demand curve goes parabolic. Every rack expansion multiplies Credo’s content per data center.
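The multiplicative point can be made concrete with a toy model. The pod size, row size, and full-mesh assumption below are illustrative choices of mine, not any hyperscaler’s actual fabric topology:

```python
# Toy illustration of why interconnect counts grow faster than GPU counts.
# Assumes a full mesh within each group -- an illustrative simplification,
# not any vendor's actual fabric design.

def full_mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n endpoints."""
    return n * (n - 1) // 2

pod_gpus = 8        # assumed GPUs per pod
pods_per_row = 16   # assumed pods per row

links_per_pod = full_mesh_links(pod_gpus)  # 28 links for 8 GPUs
row_links = pods_per_row * links_per_pod + full_mesh_links(pods_per_row)

print(links_per_pod)  # 28
print(row_links)      # 16*28 + 120 = 568
```

Going from one 8-GPU pod to a 16-pod row grows the GPU count 16x but the link count roughly 20x (28 to 568) in this toy model — cabling demand outruns compute demand, which is the mechanism behind Credo’s content-per-data-center growth.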
The full-year trajectory tells the story. Q1: $223M. Q2: $268M. Q3: ~$406M. That’s 82% growth from Q1 to Q3 within a single fiscal year, with Credo now confirming more than 200% year-over-year growth for FY2026.
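For readers who want to check the arithmetic, here is the quarter-over-quarter math using the figures cited above (all in $ millions; the ~18% beat is measured against the top of Credo’s own guided range):

```python
# Sanity check on the growth math, using the quarterly figures
# cited in the text (all in $ millions).
q1, q2, q3 = 223.0, 268.0, 406.0
guide_high = 345.0  # top of Credo's own $335-$345M Q3 guide

seq_growth = (q3 / q2 - 1) * 100          # Q2 -> Q3 sequential growth
beat_vs_guide = (q3 / guide_high - 1) * 100  # beat vs. top of own guide
fy_growth = (q3 / q1 - 1) * 100           # Q1 -> Q3, one fiscal year

print(f"Sequential growth Q2->Q3: {seq_growth:.0f}%")    # ~51%
print(f"Beat vs. own guide:       {beat_vs_guide:.0f}%")  # ~18%
print(f"Q1 -> Q3 growth:          {fy_growth:.0f}%")      # ~82%
```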
The Q4 Guide Deserves a Closer Look
Credo guided Q4 for mid-single digit sequential growth — roughly $426–$446 million. After a 51% Q3 sequential jump, the deceleration looks jarring. The bears will call this a demand cliff.
Three possible explanations, and they’re not mutually exclusive:
Sandbagging. They just guided Q3 at $335–$345M and delivered $404–$408M. Their track record of under-promising is well established. Mid-single digit could easily become another double-digit beat.
Hyperscaler order lumpiness. AEC deployments come in large cluster-level orders, not smooth weekly shipments. If major build-outs hit deployment milestones in Q3 — pulling demand forward — Q4 sees a natural digestion period before the next wave. This is normal for infrastructure builds at this scale.
Account-level dynamics. Their largest customer was 86% of Q2 revenue. On the Q2 call, the CFO said that customer might stay at similar absolute dollars while the percentage drops — meaning other customers grow faster. If the primary hyperscaler’s current AEC deployment phase is completing at specific sites, Q4 reflects that account-level plateau even as customers four and five ramp. Credo has three hyperscalers in volume production and two more in qualification. The diversification story is the bridge.
This is the question for the March 2 earnings call: is the Q4 moderation a function of deployment timing at their lead customer, or is it something structural? The answer determines whether this is a pause before the next leg up or a maturation signal.
Blue Heron: Copper Pushes Into Scale-Up
Here’s what most investors haven’t connected yet. Two weeks ago, Credo announced Blue Heron — the industry’s first 224G multiprotocol AI scale-up retimer supporting UALink, ESUN, and Ethernet. Built on 3nm. Sampling now, production in CQ3 2026.
Why this matters: my optical piece identified scale-up connectivity — GPU-to-GPU via NVLink and NVSwitch — as the domain where optical’s TAM expands dramatically with Rubin Ultra in 2027. Blue Heron extends copper’s relevance in scale-up right now. Previously, 224G scale-up cable backplanes were limited to half-rack spans. Blue Heron enables full recovery of a 40+dB 224G link across rack-scale distances.
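A back-of-envelope link budget shows why a 40+dB recovery figure translates into roughly doubling copper reach. The per-meter loss and the unassisted budget below are assumed, illustrative values for thin twinax near the ~56 GHz Nyquist frequency of 224G PAM4 — not published specs:

```python
# Back-of-envelope illustration of why a retimer extends copper reach
# at 224G. All figures below are assumptions for illustration only.

loss_per_meter_db = 10.0   # assumed twinax loss at Nyquist, dB/m
passive_budget_db = 20.0   # assumed budget an unassisted link might close
retimed_budget_db = 40.0   # the 40+dB recovery figure cited for Blue Heron

passive_reach_m = passive_budget_db / loss_per_meter_db
retimed_reach_m = retimed_budget_db / loss_per_meter_db

print(passive_reach_m)  # 2.0 m -- roughly a half-rack span
print(retimed_reach_m)  # 4.0 m -- roughly full rack-scale distance
```

Under these assumed numbers, doubling the recoverable loss budget doubles reach, which is exactly the half-rack-to-full-rack jump the announcement describes.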
This means Credo is planting a flag in the scale-up domain before optical scale-up is ready — buying the industry time and capturing revenue in the gap. It’s the same pattern: extend copper’s useful life while positioning for the optical transition. And critically, the UALink Board Chair (AMD’s Director of System Architecture) publicly endorsed it, calling Blue Heron “a crucial building block for next-generation AI and compute infrastructure.”
Read-Through Across the Stack
If Credo’s connectivity demand is running this hot, every name in my tiered investment framework benefits. Lumentum and Coherent (Tier 1) should be seeing similar demand in transceivers and laser components. Astera Labs (Tier 2) benefits from the same cluster expansion — every GPU added needs PCIe retimers. And it validates why Marvell paid $3.25 billion for Celestial AI.
Credo’s own TAM story has expanded dramatically. Management now frames five growth pillars — AECs, IC solutions (retimers and optical DSPs), ZeroFlap Optics, Active Light Cables, and OmniConnect gearboxes — collectively targeting more than $10 billion in addressable market, more than tripling their reach from just 18 months ago.
So What?
Copper is getting smarter, not dying. Credo’s AEC dominance proves that active copper solutions are displacing optics at short distances while optical wins at longer ranges. The company straddles both transitions.
Hyperscaler capex validates the demand. Amazon’s $200B, Alphabet’s $175–185B, and the broader infrastructure build-out create multiplicative connectivity demand that even Credo’s own visibility underestimated by 18%.
The Q4 guide is a question, not an answer. Whether this reflects sandbagging, order lumpiness, or account-level timing will define the next 12 months of the stock. March 2 is the date.
Blue Heron opens a new front. Scale-up connectivity is the next TAM expansion, and Credo just planted its flag there before optical is ready.
The physics hasn’t changed. Copper still can’t keep up at distance. But at short range, it’s winning — and Credo owns both sides of the transition. The revenue is proving it faster and bigger than even the companies building the infrastructure predicted.
If you found this analysis valuable, please share it; it helps more than you know. And if you haven’t subscribed yet, now’s the time. BEP Research will be moving to a paid model in the coming weeks, so watch for a special sign-up offer in your inbox. I’m committed to delivering institutional-quality analysis on AI infrastructure that you won’t find anywhere else.
For the full technology stack analysis, Qualcomm FOM framework, and tiered investment framework: The AI Datacenter Optical Interconnect Boom
About the Author
Ben Pouladian is a Los Angeles-based tech investor and entrepreneur focused on AI infrastructure, semiconductors, and the power systems enabling the next generation of compute. He was co-founder of Deco Lighting (2005–2019), where he helped build one of the leading commercial LED lighting manufacturers in North America. Ben holds an electrical engineering degree from UC San Diego, where he worked in Professor Fainman’s ultrafast nanoscale optics lab on silicon photonics and micro-ring resonators, and interned at Cymer, the company that manufactures the EUV light sources for ASML’s lithography systems.
He currently serves as Chairman of the Leadership Board at Terasaki Institute for Biomedical Innovation and is a YPO member. His investment research focuses on AI datacenter infrastructure, GPU computing, and the semiconductor supply chain. Long-term NVIDIA investor since 2016.
Follow on Twitter/X: @benitoz | More at benpouladian.com
Disclosure: The author holds positions in NVDA, CRDO, LITE, and ALAB. This is not investment advice.