THE NUMBER: $300M — what a secret buyer just spent on AMD GPUs cooled with lab-grown diamonds. The AI ceiling isn't code. It's heat.

Here's the story thirty newsletters aren't covering today. Everyone has a take on Jack Dorsey firing 4,000 people. Nobody is covering the diamonds.

A secret buyer placed a $300 million order for AMD GPUs with lab-grown diamond cooling. Lab-grown diamonds conduct heat five times better than copper. The chips run throttle-free at higher temperatures: more sustained compute per watt, fewer GPUs per rack, lower power draw per facility. The constraint on scaling AI inference isn't the model. It's the thermal ceiling. And someone just bet $300 million that diamonds break through it.
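The "five times better than copper" claim can be sanity-checked with Fourier's law of heat conduction, Q = k·A·ΔT/d. This is a minimal sketch using typical literature conductivity values (not vendor specs); the geometry and temperature delta are illustrative assumptions.

```python
# Fourier's law sketch: conductive heat flow Q = k * A * dT / d.
# Conductivity figures are typical literature values, not vendor specs.

def heat_flow_watts(k: float, area_m2: float, delta_t_k: float, thickness_m: float) -> float:
    """Steady-state conductive heat flow through a flat spreader."""
    return k * area_m2 * delta_t_k / thickness_m

COPPER_K = 400.0    # W/(m*K), typical for copper
DIAMOND_K = 2000.0  # W/(m*K), lab-grown diamond (roughly 5x copper)

# Same spreader geometry for both: 1 cm^2 area, 1 mm thick, 10 K across it.
q_copper = heat_flow_watts(COPPER_K, 1e-4, 10.0, 1e-3)    # -> 400 W
q_diamond = heat_flow_watts(DIAMOND_K, 1e-4, 10.0, 1e-3)  # -> 2000 W
```

Same geometry, same temperature gradient, five times the heat pulled away. That ratio is the entire thesis of the diamond bet.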

This is your ASML story. In February, thirty newsletters covered the Anthropic Pentagon drama. Almost nobody covered ASML's EUV breakthrough that could increase chip output 50%. Same dynamic here. The diamond order won't trend on X. It won't generate hot takes. But it tells you exactly where the infrastructure bottleneck sits and who's spending real money to move it.

Every computing era was defined by a materials breakthrough, not a software one. The transistor replaced the vacuum tube. Fiber replaced copper wire. EUV lithography enabled sub-7nm chips. Each time, the people tracking the software missed the signal. The people tracking the materials made the money. The current AI race is being covered as a software competition. It isn't. It's a physics competition.

Apple is making the same argument from the other direction. The M5 Pro and M5 Max ship with 4x faster LLM processing than last year's chips, neural accelerators baked into the GPU cores, and unified memory that eliminates the CPU-GPU bottleneck throttling every other local inference setup. The 14-inch MacBook Pro starts at $2,199. The pitch: pay once, run forever. Zero marginal cost per inference. No API keys. No per-token billing. OpenAI charges $15 per million tokens for its flagship model. Apple just told you the marginal cost of inference on their hardware is your electricity bill.

The diamond order and the M5 are the same story from opposite directions. Centralized inference scales by solving thermodynamics. Local inference scales by owning the silicon. Either way, the moat in AI is shifting from model weights to atoms. The companies spending real money on physics already know this. The quarterly earnings calls haven't caught up.

If your AI cost model assumes per-token cloud pricing indefinitely, you now have two data points telling you that assumption expires. Run the math on what your team's inference bill looks like at zero marginal cost. If the answer changes your roadmap, it should change it now.
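That math is a one-screen exercise. A minimal break-even sketch, using the $15/M-token API rate and $2,199 hardware price from the text; the local throughput, power draw, and electricity rate are assumptions for illustration, not measurements.

```python
# Hypothetical break-even: cloud per-token pricing vs. owned hardware.
# Only the API rate and hardware price come from the newsletter;
# throughput, power draw, and electricity rate are illustrative guesses.

CLOUD_PRICE_PER_M = 15.00   # $ per 1M tokens (flagship API rate cited above)
HARDWARE_COST = 2_199.00    # one-time purchase price cited above
POWER_DRAW_KW = 0.10        # assumed average draw under inference load
ELECTRICITY_RATE = 0.15     # assumed $ per kWh

def cloud_cost(tokens: int) -> float:
    """Cost of pushing this many tokens through a per-token API."""
    return tokens / 1_000_000 * CLOUD_PRICE_PER_M

def local_cost(tokens: int, tokens_per_sec: float = 50.0) -> float:
    """Hardware price plus electricity for the same volume,
    at an assumed local throughput of tokens_per_sec."""
    hours = tokens / tokens_per_sec / 3600
    return HARDWARE_COST + hours * POWER_DRAW_KW * ELECTRICITY_RATE

# Double the volume until owning the box beats renting the tokens.
tokens = 1_000_000
while local_cost(tokens) > cloud_cost(tokens):
    tokens *= 2
print(f"Local hardware wins somewhere below {tokens:,} tokens")
```

Under these assumptions the crossover lands in the low hundreds of millions of tokens, which a small team can burn through in months. Swap in your own throughput and power numbers; the shape of the curve is the point, not the exact crossover.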

The people tracking the materials are already moving. The benchmark debate is still running on the old treadmill.

- Harry DeMott & Anthony Batt

On the site today: The full breakdown — Apple's M5 inference strategy, the $300M diamond play, why Dorsey's layoffs are the wrong story, and the five-person team math that should change every budget conversation → getcoai.com

From the Scroll: OpenAI at $110B and $840B valuation. 62% of enterprises still in pilot phase. The Jevons Paradox is eating your productivity gains → getcoai.com/scroll
