THE NUMBER: 9 — days until the FTC defines "reasonable care" for AI. OpenAI already told you where they stand: they deleted "safely" from their mission statement four months ago.
In 1966, the tobacco industry did something quietly brilliant. Congress forced them to print the Surgeon General's warning on every pack, and instead of fighting it, the industry's lawyers recognized what it actually was: a liability transfer mechanism dressed up as public health policy. The manufacturer documented the risk. The consumer accepted it. The lawsuits stopped. Disclosure replaced safety as the legal standard, and cigarettes kept selling for another sixty years. The industry understood that the warning label isn't a concession. It's a release form.
OpenAI ran the same play in two moves. In November 2025, when they restructured into a for-profit, they quietly filed a new mission statement. The old language: "build AGI that safely benefits humanity." The new language: "ensure AGI benefits all of humanity." One word didn't survive the rewrite: "safely." We covered it in February when Fortune traced six versions of the mission statement across nine years. At the time it read like legal housekeeping. It wasn't. It was the first document in the sequence.
The second document came Friday. OpenAI published its system card for GPT-5.3-Codex. (A system card is the safety disclosure OpenAI publishes every time it releases a new model; think of it as the ingredient label: what the model can do, what internal testing found concerning, what controls are in place. Most enterprise customers have never read one.) Page 14 carries a "high" cybersecurity risk rating, the first time OpenAI has put that designation on a deployed model. The language is precise: "could meaningfully enable real-world cyber harm if scaled or automated." The model shipped the same day the card did. That's not a disclosure failure. That's the system working exactly as intended. Remove safety from the promise. Add the warning to the product. The reader who didn't open the document absorbed the risk. The company that wrote it and shipped anyway gets the revenue.
The FTC has nine days to decide whether that's enough. "Reasonable care" is the legal standard for companies deploying technology that could cause consumer harm. If the guidance lands on "you were given the documentation and chose not to read it," every enterprise IT team that deployed GPT-5.3-Codex without opening the system card just inherited a liability their legal department didn't price.
The rest of the week reads as bets placed on top of that one. Anthropic got blacklisted for refusing to strip its own warning labels: the "no autonomous lethal targeting, no mass domestic surveillance" constraints the Pentagon wanted gone. Dario Amodei's response was to bid on drone work he believes his constraints allow, which is either a principled strategy or the most expensive way in history to make a point. Sam Altman, meanwhile, closed $110B from Amazon, Nvidia, and SoftBank and expanded OpenAI's AWS deal to $100B over eight years, locking in the infrastructure layer so thoroughly that switching from OpenAI isn't a model migration; it's a cloud renegotiation. And MiniMax dropped M2.5 from Shenzhen, benchmarked against Claude Opus 4.6 at a lower price, under no equivalent disclosure requirements at all.
The tobacco industry's bet was right for sixty years. Disclosure worked better than safety as a liability strategy, and the market kept rewarding it until the class-action suits came. OpenAI deleted "safely" in November. They shipped the warning label in March. We're nine days from the government deciding whether that sequence is legally sufficient.
Read the system card before March 11. Not because the FTC will definitely require it. Because if they do, your general counsel will ask why you didn't, and "we didn't know there was a document" is not the answer you want on record.
The warning label is only optional until someone decides it isn't.
On the site today: The full analysis of what OpenAI's system card actually says, why Anthropic's drone bid is a strategy, not a stunt, and what China's $5B in robotics capital means for your supply chain → getcoai.com
From the Scroll: MiniMax M2.5 benchmarks against Claude at lower cost. Colorado's AI Act hits June 30. NVIDIA GTC is two weeks away → getcoai.com/scroll