THE NUMBER: 67% — the collapse in entry-level tech hiring since AI tools went mainstream. That's not a labor market correction. That's an industry eating its seed corn.
This week, AI crossed a line that most people won't recognize until it's behind them. Karpathy open-sourced a tool that lets AI agents run 100 ML experiments overnight — no human in the loop. Shopify's CEO adapted it and got a 19% improvement while he slept. Anthropic shipped a system where AI agents review AI-generated code at a <1% false positive rate. And Claude ran autonomous research on its own interpretability — AI improving AI's ability to understand itself.
We've seen this exact pattern once before, and it changed the world. When the first machine tool could cut the parts to build another machine tool, the Industrial Revolution became inevitable. The lathe that builds the lathe. Every cycle faster than the last. Every output feeding the next input. That's where we are.
But here's what makes this moment different from every previous automation wave: the improvement loop depends on human experts to reject the 17-30% of AI output that's wrong. Every correction — "no, not like that" — creates a constraint that didn't exist before. The output is disposable. The rejection is the asset. And right now, that asset evaporates after every conversation because nobody is capturing it.
Meanwhile, the organizations being built around these self-improving systems look nothing like the ones they're replacing. Tunguz published the math this week: a 150-person company has 11,175 communication channels. A 30-person AI-augmented team producing equivalent output has 435. Anthropic generates $5M in revenue per employee. Traditional SaaS considers $300K strong. That's not a productivity improvement. That's a different species of company. And the species is evolving — Amazon cut 16,000 middle managers this quarter because agentic workflows made them redundant. Jeff Dean predicts engineers will manage 50+ agents each. The question isn't "how many people can one manager oversee?" It's "how many agents can one human orchestrate?"
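Those channel counts match the classic pairwise-communication formula, n(n−1)/2, assuming every pair of people is a channel (the source doesn't state the formula explicitly, but the numbers line up exactly). A quick sanity check in Python:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n people: n choose 2."""
    return n * (n - 1) // 2

print(channels(150))  # 11175 -- the 150-person company
print(channels(30))   # 435   -- the 30-person AI-augmented team
```

The quadratic growth is the point: five times the headcount means roughly twenty-five times the coordination overhead.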
And in Paris, Yann LeCun raised $1.03 billion — Europe's largest seed round ever — to build world models that he says will make the entire LLM paradigm obsolete. The backers tell the story: Nvidia, Toyota, Samsung, Bezos. Hardware companies. Physical-world operators. Companies that need AI that understands atoms, not tokens. If the future is specialized expert agents running in autonomous teams — Cialdini's persuasion principles encoded in one, game theory in another, visual perception in a third — does it matter whether they're built on LLMs or world models? The architectural question underneath the funding headlines: are we building one brain, or building a team?
One question ties all three stories together. When the self-improving models have compounded through millions of correction cycles, and the expert agents know more about your domain than any single human, and entry-level hiring has eliminated the pipeline that produces the next generation of expert rejectors — who checks the checker?
Build the answer now. While there's still someone around who can.
On the site today: The full three-act analysis — self-improving AI, the mutating org chart, and the billion-dollar paradigm fork — plus three questions we think every board should be asking → getcoai.com
From the Scroll: Nate Jones on why your rejections are more valuable than your prompts → Substack
Bassim Eledath's 8 Levels of Agentic Engineering → Blog
Tunguz on the org chart math → Blog
— Harry and Anthony