We are living through the biggest technological transformation in modern history. The printing press, the railroad, the internet — pick your analogy and it still undersells what's happening right now. AI isn't a product category. It's a reordering of who gets to know things, who gets to build things, and who gets to decide what the rules are. The people steering it are worth watching closely. Not because they're heroes. Because they're not.
Last week, Sam Altman and Dario Amodei stood side by side in front of Narendra Modi at the India AI Summit and couldn't manage a handshake. Altman then told reporters he was "confused" about what had happened — on camera, with the footage playing on every screen behind him. The two men running the most powerful AI labs on earth got publicly petty in front of a world leader, and one of them lied about it afterward. That was last week.
This week it got harder. Dario Amodei is walking into the Pentagon to face a Defense Secretary who has given him a simple choice: hand over Claude for unrestricted military use — autonomous weapons, mass surveillance, all of it — or lose the contract. Amodei's answer, so far, is no. This morning, Elon Musk's xAI signed the deal Anthropic refused. Grok is now in classified military systems. Musk, who thrives precisely where Altman and Amodei stumble — in rooms where the rules haven't been written yet — just became the Pentagon's preferred AI partner while his two rivals were managing their personal feud.
Meanwhile the market kept moving. Andrej Karpathy warned that 2026 will be the year the internet drowns in its own AI output — a slopacolypse, he called it, and the word is better than it sounds. IBM fell 13%. CrowdStrike retreated nearly 10%. Microsoft dropped 3%. AI isn't a feature you bolt onto enterprise software. For a growing list of categories, it's the replacement.
The founders are falling apart in public. The products don't care. Somewhere in that gap is the story of our moment.
Dario Said No. Elon Said Yes. The Pentagon Picked Its Side.
As this issue goes out, Dario Amodei is walking into the Pentagon for a meeting that Defense Secretary Pete Hegseth's own officials have described, with characteristic Washington bluntness, as a "shit-or-get-off-the-pot" scenario. Not a briefing. Not a negotiation. An ultimatum.
The dispute has been building for months. Claude is the only AI model on the Pentagon's classified networks — the systems where intelligence analysis, weapons development, and battlefield operations run. The Pentagon wants "all lawful use," meaning the military decides what's lawful and Claude executes. Anthropic has refused two specific demands: autonomous weapons and mass surveillance of Americans. Those are lines Claude won't cross, the company says, regardless of what the contract requires. This wasn't a surprise position. In January, Amodei posted his reasoning on X in plain language: "We should arm democracies with AI, but we should do so carefully and within limits." The Pentagon read it. They didn't like the limits part.
Hegseth's threatened move is to designate Anthropic a "supply chain risk" — language typically reserved for foreign adversaries — which would void the DoD contract and force every Pentagon partner to drop Claude entirely. A $200 million contract, gone. Every classified Claude deployment, pulled. The Pentagon already has the replacement lined up.
That replacement is Grok. Musk's xAI quietly signed the "all lawful use" agreement Anthropic refused. Under the deal, confirmed by Axios, Grok gets embedded directly into GenAI.mil — the Defense Department's internal AI platform — giving 3 million military and civilian employees access at the highest security clearance level. xAI agreed to everything Anthropic won't. The timing of the announcement, landing the same day as Amodei's summons, may or may not have been deliberate. It reads as leverage either way.
"We should arm democracies with AI, but we should do so carefully and within limits."
— Dario Amodei, January 2026
Here's what the contrast actually means. Anthropic was built around a specific bet: that safe, honest AI is worth building even when it's commercially inconvenient. Refusing to enable autonomous weapons isn't a negotiating tactic — it's the company doing exactly what it said it would do. Musk made no equivalent promises, has every financial reason to deepen Pentagon relationships across his companies, and said yes without hesitation. He operates differently: fewer constraints, faster moves, more contracts signed. In Washington right now, that's a competitive advantage.
The U.S. military is now deploying AI from a developer with no public constraints on how it gets used, while the company that built hard limits around those exact scenarios faces being blacklisted. That's not just a business outcome. It's a signal every other AI company is reading right now. Building in limits costs contracts. Removing them doesn't.
This story has been covered as a procurement dispute. It's bigger than that. What comes out of today's meeting will tell you whether "responsible AI" is a real commercial position in 2026 or a slogan that holds until the first serious test.
↻ We'll track the outcome and keep you posted.
THE STRATEGIC READ
If you're running Claude in any regulated context — healthcare, finance, legal, government — you have a new variable to assess. Anthropic holding its limits under this kind of pressure is a feature, not a liability. But a company fighting for its biggest contract isn't making the same decisions as one operating from strength. Watch what comes out of Washington today.
Karpathy Called It the Slopacolypse. He Means the Internet Is Broken.
Andrej Karpathy doesn't overstate things. He coined "vibe coding" to describe something real. When he says something is changing, developers pay attention. On February 3rd, he posted that he'd been going back to RSS/Atom feeds because there's "a lot more higher quality longform and a lot less slop intended to provoke." Any product that looks different today but runs on the same engagement-driven incentives, he wrote, "will eventually converge to the same black hole." He was talking about X, LinkedIn, every feed optimized to keep you scrolling.
That post came about a week after one that stopped developers cold. On January 26th, Karpathy described how his coding workflow had flipped in under two months — from 80% manual to 80% AI agents — and called it "the biggest change in ~2 decades of programming." Then the warning: "I am bracing for 2026 as the year of the slopacolypse across all of GitHub, Substack, arXiv, X, Instagram, and generally all digital media."
"I am bracing for 2026 as the year of the slopacolypse across all of GitHub, Substack, arXiv, X, Instagram, and generally all digital media."
— Andrej Karpathy, January 26, 2026
What he's describing isn't a content quality problem. It's an economics problem. When the cost of producing an article, a code commit, a research abstract, or a social post drops to near zero, the bottleneck moves from production to verification. You can generate a thousand convincing-looking things in the time it used to take to write one. The question isn't "did someone make this?" It's "can I trust it?" And we don't have good tools for that yet.
The Reuters Institute put numbers to where this lands for media. News organizations are bracing for a 40% drop in search referrals over the next three years as AI answer engines — Google's AI Overviews, ChatGPT, Perplexity — intercept queries before anyone reaches a publisher's site. Only 38% of media executives feel good about journalism's future, down 22 points in four years. Publishers are responding by doubling down on original investigations (+91%) and analysis (+82%), while pulling back from the general news AI can reproduce for free (–38%).
The filtering problem is worse than it looks on paper. Platform labels for AI content can slash engagement by 15% to 80%, so the creators who've gotten good at generating it are simply not disclosing it. MIT researchers said in February that sorting slop from legitimate content will "take metadata and trusted sources." That infrastructure doesn't exist yet. Until it does, the slop circulates under human bylines.
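The metadata infrastructure the MIT researchers are pointing at doesn't exist yet, but its basic shape is easy to sketch: a publisher cryptographically signs what it publishes, and a reader checks the signature before trusting the byline. A toy Python illustration of that idea — the key, article text, and function names here are invented for the example, and real provenance standards such as C2PA use public-key certificates rather than a shared secret:

```python
import hashlib
import hmac

def sign(article: bytes, key: bytes) -> str:
    """Publisher side: produce a signature tied to this exact content."""
    return hmac.new(key, article, hashlib.sha256).hexdigest()

def verify(article: bytes, signature: str, key: bytes) -> bool:
    """Reader side: does the content still match what was signed?"""
    return hmac.compare_digest(sign(article, key), signature)

key = b"publisher-secret"  # invented for illustration
article = b"Original reporting, human byline."

sig = sign(article, key)
assert verify(article, sig, key)                # untampered content passes
assert not verify(article + b"slop", sig, key)  # any alteration fails
```

The point of the sketch is the asymmetry: generating convincing content is now cheap, but forging a valid signature without the publisher's key is not — which is why "metadata and trusted sources" is where the filtering problem eventually has to be solved.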
Here's what's odd and instructive about Karpathy's RSS recommendation. He has access to every AI tool that exists. His solution to the coming flood of AI content is a 25-year-old syndication protocol that runs on simple subscription lists. The reason is obvious once you say it: RSS delivers what people chose to follow, without an algorithm deciding what to surface. You subscribe to a writer. They post. You read it. No engagement optimization, no sponsored slop, no viral garbage inserted because it performed well somewhere else.
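For anyone who has forgotten how bare-bones RSS is, the whole mechanism fits in a few lines. A minimal Python sketch using only the standard library — the feed content is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed, as a writer's site might serve it.
# (Illustrative content only.)
FEED_XML = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Writer</title>
    <item><title>Post three</title><pubDate>Tue, 03 Feb 2026 09:00:00 GMT</pubDate></item>
    <item><title>Post two</title><pubDate>Mon, 26 Jan 2026 09:00:00 GMT</pubDate></item>
  </channel>
</rss>"""

def read_feed(xml_text: str) -> list[str]:
    """Return item titles in the order the publisher listed them.
    No ranking, no engagement signals -- just what was posted."""
    root = ET.fromstring(xml_text)
    return [item.findtext("title") for item in root.iter("item")]

print(read_feed(FEED_XML))  # prints ['Post three', 'Post two']
```

That's the entire protocol from the reader's side: fetch the XML, list the items, done. There is no slot in that loop where an algorithm can insert something you didn't subscribe to.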
The slopacolypse won't kill journalism. It will kill the version that tried to compete with social media on volume and speed and lost. The version that survives is the one that was always the point: people who know things, writing for people who need to know them, with their names on it.
THE STRATEGIC READ
If you use media for business — research, market intelligence, legal, anything where information quality matters — your sources are about to matter more than they have in a decade. The signal-to-noise gap is widening fast. Reputation is becoming the scarcest thing in the information business.
IBM Fell 13%. AI Isn't a Feature. It's the Replacement.
Last week, CrowdStrike and Cloudflare each fell 8% when Anthropic launched Claude Code Security. The market's read was fast: an AI that reasons through code the way a senior security researcher would isn't a tool that helps the security team. It threatens the business model that charges for the security team.
This week the market kept going. IBM fell 13%. CrowdStrike retreated another 10%. Microsoft dropped 3%. Salesforce is down roughly 30% year-to-date. Adobe has fallen 27%. The trigger is Claude Code's sustained pressure on enterprise software categories, made worse by the fact that Sam Altman said the quiet part out loud at the India AI Summit: "It is totally true that software is now far easier to create than ever before, and I'm sure that will be quite bad for some software companies." The CEO of OpenAI told a room full of enterprise software customers that his products are going to hurt them. The market took notes.
"I'm sure that will be quite bad for some software companies."
— Sam Altman, India AI Summit, February 2026
IBM's core business — consulting, workflow automation, IT management, services — runs on accumulated knowledge. IBM consultants know how to configure the system because they've done it a thousand times. That knowledge, packaged into people and software and billed at enterprise margins, has been the moat for decades. Claude Code doesn't come pre-loaded with that history. It reads whatever system it's pointed at, reasons through it, and tells you what it found. It doesn't need the thousand previous engagements. It figures it out from the code itself.
IBM is the most visible example, but it's a category problem. Enterprise software has been sold for thirty years on the premise that the software's value comes from what it already knows — its rules, its patterns, its integrations, its history. That's what you're buying. AI reasoning tools don't need pre-loaded knowledge. They apply reasoning to whatever they encounter. The moat legacy vendors built around their proprietary knowledge bases isn't being beaten by a better knowledge base. It's being made irrelevant by a system that doesn't need one.
The stock market isn't saying Claude Code Security is better than CrowdStrike today. It's pricing in a trajectory. A tool that reasons through code will improve faster than a tool that waits for humans to update its rules database. The model gets smarter. The rules database always lags. IBM's 13% drop is the market extending that logic across the whole category.
The SaaS model — sell access to software that knows things — is being replaced by the AI model: sell access to reasoning that figures things out. Anthropic, OpenAI, and Google aren't building features for enterprise software. They're building the enterprise software. The companies that understand this are repositioning around customer relationships, distribution, and compliance as their actual moat. The ones that don't are watching their stock prices say what their customers haven't quite said yet.
Last week, Pomelli moved the floor on commercial photography to zero. This week's version: the floor is moving in enterprise software. The question for every SaaS company — security, IT management, legal tech, financial analysis, HR, customer service — is the same one it is for photographers. What's left of your moat when the AI reasons through the problem from scratch, without a rules database, for a fraction of the cost?
THE STRATEGIC READ
Enterprise software buying cycles run 18 to 36 months. The decisions being made right now in Q1 2026 will determine which platforms are locked in when the capabilities gap closes. The question to ask every vendor this quarter isn't "do you use AI?" It's "is your core value something AI can reason through, or something it can't?" The list of honest answers to that second question is getting shorter.
THE BOTTOM LINE
This is a reordering of who gets to know things. Not a disruption. Not a platform shift. A wholesale redistribution of access to knowledge, capability, and economic power — moving faster than any institution, government, or career plan was built to handle.
Most people are going to lose ground. Not because they're slow or uninformed. Because the people controlling the pace — the ones who own the models, the infrastructure, the data — are moving faster than the systems that used to protect everyone else. The lawyer who spent a decade building expertise in contract review. The analyst with a specialized workflow nobody else understands. The journalist who owns a beat. The engineer who knows the legacy codebase better than anyone. The security consultant who's seen every threat pattern. All of them are watching what they built drain into tools that anyone with an API key can run.
The generation entering the workforce right now has no frame for what's coming. They were told to get educated, build expertise, grow a career. Nobody mentioned that the definition of expertise was being rewritten while they were studying for it. They're walking into a labor market that's repricing human knowledge faster than any generation before them has had to absorb. The printing press took decades to restructure who controlled information. The internet took years. This is taking months.
And the three men most responsible for the speed of all this — Sam Altman, Dario Amodei, Elon Musk — are, at this particular moment, a mess. Altman lied to reporters about footage they were watching in real time. Amodei is in the Pentagon right now fighting for a contract that could be voided by end of day. Musk signed the deal Amodei refused, handed the military unrestricted AI access, and is thriving — not despite the chaos, because of it. He moves when others hesitate. He signs when others are still reading the contract. In a moment that rewards speed and nerve over principle and process, that's just an advantage.
The products don't need the founders to have it together. IBM fell 13%. CrowdStrike fell 10%. The internet is about to be buried under its own AI output. A meeting at the Pentagon this morning may settle whether ethical constraints on AI are commercially survivable or not. None of it waited for anyone to feel ready.
You are living through the biggest reshuffling of power in modern history. The people running it are not equal to the moment. Plan accordingly.
TRACKING — WHAT CEOS SHOULD BE WATCHING
KEY PEOPLE & COMPANIES
| NAME | ROLE | COMPANY | X |
|---|---|---|---|
| Dario Amodei | CEO | Anthropic | |
| Pete Hegseth | Secretary of Defense | U.S. Dept. of Defense | — |
| Elon Musk | CEO | xAI | |
| Andrej Karpathy | Independent researcher | — | |
| Sam Altman | CEO | OpenAI | |
ALL SOURCES
Compiled from sources across news sites, X threads, and company announcements. Cross-referenced with thematic analysis and edited by Anthony Batt, Harry DeMott and CO/AI's team with 30+ years of executive technology leadership.