THE NUMBER: 10x (and 0) — the efficiency gain of NVIDIA's next-gen Vera Rubin chip over current hardware, and the book value of every GPU it replaces.
Last night NVIDIA posted $68.1 billion in revenue, beat on every line, and guided Q1 to $78 billion against a Street expectation of $73 billion. Jensen Huang declared "the agentic AI inflection point has arrived" and coined a new phrase: "Compute equals revenues." Every newsletter this morning will lead with the beat. They'll miss the story.
The story is Vera Rubin. Samples shipped to customers this week. 5x the inference performance of Blackwell. 10x lower cost per token. Ships H2 2026. And here's the question nobody on the earnings call asked: when the replacement chip is 10x more efficient per watt, what is the economic useful life of the chip it replaces?
Michael Burry has been making this argument for a year. Google and Microsoft extended GPU depreciation from 3 years to 6. Meta went from 3 to 5.5. Burry estimates this understates depreciation by $176 billion through 2028. NVIDIA says the old A100s still run at full utilization. Fine. "Still running" and "economically competitive" are different things. An A100 at full tilt burns 10x more power per token than Vera Rubin will. When energy is the binding constraint, old chips aren't assets. They're liabilities with a power bill.
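The "liabilities with a power bill" claim is a back-of-envelope calculation. Here is a minimal sketch of it; the 10x power-per-token ratio comes from the argument above, but the tokens-per-kWh baseline and the electricity price are invented for illustration:

```python
def cost_per_million_tokens(power_ratio, energy_price_kwh, tokens_per_kwh):
    """Energy cost to produce 1M tokens, given a chip's power-per-token
    multiple relative to the newest hardware (power_ratio=1.0)."""
    kwh_needed = power_ratio * 1_000_000 / tokens_per_kwh
    return kwh_needed * energy_price_kwh

# Hypothetical baseline: assume the new chip yields 500,000 tokens per kWh
# at $0.08/kWh industrial power. Both figures are assumptions.
TOKENS_PER_KWH = 500_000
PRICE = 0.08

new_chip = cost_per_million_tokens(1.0, PRICE, TOKENS_PER_KWH)
old_chip = cost_per_million_tokens(10.0, PRICE, TOKENS_PER_KWH)  # 10x per the text

print(f"new chip energy cost per 1M tokens: ${new_chip:.2f}")  # $0.16
print(f"old chip energy cost per 1M tokens: ${old_chip:.2f}")  # $1.60
```

Even with the old chip fully written down to zero book value, its marginal energy cost per token is 10x the new chip's. When power capacity, not capex, is the binding constraint, every watt fed to the old chip displaces ten times the output, which is the sense in which it becomes a liability rather than an asset.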
I've watched this pattern repeat across four decades of technology shifts. Every time a new platform arrives, it doesn't just win. It retroactively destroys the value of whatever came before it. Containerization didn't just move goods cheaper. It made every break-bulk port a stranded asset. The smartphone didn't just add a screen. It zeroed out the value of every dedicated GPS, camera, and MP3 player. The new thing doesn't compete with the old thing. It obsoletes it.
That's what's happening now, and not just in silicon. This same week, Claude Code demonstrated it can read COBOL, and IBM lost $31 billion in market cap in a single session. The stock market is a discounted cash flow machine. IBM's price assumed nobody else could maintain that code. The moment an AI can read it, the lock-in premium evaporates. You don't need full modernization. You just need the option to leave. And options reprice everything.
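The repricing mechanic can be made concrete with a toy discounted cash flow. Every input here is invented for illustration; the point is only the shape of the result, not the numbers:

```python
def dcf(cash_flow, growth, discount, years=20):
    """Present value of a growing (or decaying) cash flow stream
    over a finite horizon, as a multiple of year-zero cash flow."""
    return sum(cash_flow * (1 + growth) ** t / (1 + discount) ** t
               for t in range(1, years + 1))

# Hypothetical maintenance franchise with $1B/yr of lock-in revenue.
# Before: customers can't leave, so model revenue as flat.
locked_in = dcf(1.0, growth=0.00, discount=0.08)

# After: an AI gives customers a credible exit option. Model it crudely
# as revenue decaying 10%/yr as contracts reprice or migrate.
with_exit_option = dcf(1.0, growth=-0.10, discount=0.08)

print(f"locked-in value:  {locked_in:.2f}x annual revenue")   # ~9.82x
print(f"with exit option: {with_exit_option:.2f}x annual revenue")  # ~4.87x
```

Under these made-up inputs, roughly half the multiple evaporates before a single customer actually migrates. That is the sense in which the option to leave, not the leaving itself, does the repricing.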
Hardware. Software. And now process knowledge itself. Guidde just raised $50 million to train AI agents by watching expert video. Not reading docs. Watching. Meanwhile, 16 million stolen Claude conversations were used to train competing Chinese models. The moat you built on "it takes years to train people to do this" is a sandcastle at high tide.
Depreciation is compressing across the entire stack: silicon, software, institutional knowledge. The accounting hasn't caught up. The market is starting to. If you're allocating capital anywhere near technology right now, the only question that matters is: what's the real useful life of what I own?
Everything depreciates faster now. The only question is whether you see it first or get the write-down.
On the site today: The full analysis, including the Burry thesis applied to software valuations, Claude's dominance in the LLM Skirmish coding benchmark, and what visual imitation learning means for every company whose moat is process complexity → getcoai.com