The Inverse Problem, Part II: Seventy-Two Hours
The story of a government that banned a tool on Friday, used it to fight a war on Saturday, and replaced it with a weaker copy by Sunday.
Three days ago, I published an essay called “The Inverse Problem,” about what it means when a for-profit company becomes the last guardrail against autonomous killing machines and mass surveillance. I argued that the roles of regulator and regulated had reversed — that a corporation was defending democratic values while the government demanded the right to override them.
I should note that I use Claude — the AI at the center of this story — alongside Google’s Gemini and NotebookLM in my daily work. I’ll address that directly at the end. For now, what matters is what happened in the seventy-two hours after that essay went live, because the story accelerated beyond anything I could have predicted, proving the thesis in ways that border on the absurd.
Friday: The Ban
You know the broad strokes from my first essay. Anthropic refused to remove two restrictions from its military contract: no fully autonomous weapons, no mass domestic surveillance. The Pentagon demanded access for “all lawful purposes.” Anthropic said no. Defense Secretary Hegseth designated the company a “supply chain risk” — a designation previously reserved for foreign adversaries like Huawei. Trump ordered every federal agency to cease using Anthropic’s technology.
What I did not know when I published is what the Pentagon had actually been asking for behind the scenes. According to Axios, even as Hegseth was tweeting the supply chain designation, a senior defense official was on the phone offering Anthropic a last-minute deal. That deal would have required Anthropic to permit the collection and AI-powered analysis of Americans’ personal data: geolocation, web browsing history, and financial information purchased from data brokers. Real surveillance, using specific, commercially available data about American citizens, processed at scale by artificial intelligence.
Anthropic said no to that, too.
Friday Night: The Replacement
Hours after the ban, OpenAI CEO Sam Altman announced that his company had reached its own deal with the Pentagon for classified deployment. Altman said the agreement included the same red lines Anthropic had demanded: no mass surveillance, no autonomous weapons.
Here is the detail that matters more than anything else in this story, and I want you to read it carefully.
Anthropic’s contract contained absolute prohibitions: restrictions that held regardless of what any other law said. OpenAI’s contract permits the Pentagon to use its technology for “all lawful purposes” — and its red lines apply only where existing law already forbids the action.
This distinction is the entire game.
Collecting commercially available personal data on American citizens — your location, your browsing habits, your financial records — is already legal. The government has been buying this data from brokers for years. Lethal autonomous weapons systems are not firmly prohibited by any U.S. statute; they are only partially constrained by a Defense Department directive that the Defense Department itself can revise at any time. As one analysis of the contract language put it: the wording “repeatedly qualifies OpenAI’s prohibitions as dependent on existing restrictions. Anything not forbidden is permitted.”
So when OpenAI says “no mass surveillance” but qualifies it with “consistent with applicable law,” and the applicable law already permits mass data collection, what exactly has been prohibited? When OpenAI says “no autonomous weapons” but conditions it on regulations the Pentagon writes for itself, what exactly has been restricted?
Altman admitted on social media that the deal was “definitely rushed” and that “the optics don’t look good.” He was right about the optics. Whether he was right about the substance depends entirely on contract language that is not fully public and may never be.
After the contract details were published, Techdirt’s Mike Masnick argued the deal “absolutely does allow for domestic surveillance,” because it references compliance with Executive Order 12333 — a Reagan-era directive that intelligence agencies have long used to collect communications data on Americans by tapping lines outside the country.
The Inverse Problem just gained a new dimension: the roles of corporation and government have reversed, and now the language of safety itself can be deployed as a simulacrum — an image of restraint that functions as its opposite.
Saturday: The War
On February 28, the United States and Israel launched Operation Epic Fury against Iran. Strikes hit nuclear facilities, military infrastructure, and a government compound in Tehran. Ayatollah Ali Khamenei was killed.
And according to the Wall Street Journal, Axios, and Reuters, U.S. Central Command used Claude throughout the operation — for intelligence assessment, target identification, and simulation of battle scenarios. The same tool the president had banned the day before. The same company he had called “left-wing nut jobs.” The same technology the Pentagon had designated a supply chain risk equivalent to a foreign adversary.
Defense officials told reporters that removing Claude from the operational workflow was simply not possible on short notice. The model is embedded in classified systems in ways that cannot be unwound in a day, or a week, or even six months without significant risk to ongoing operations.
Consider what this means. On Friday, the U.S. government declared that Anthropic’s technology was so dangerous to national security that every contractor in the country must sever ties with it. On Saturday, the U.S. government used that same technology to prosecute the most consequential military operation of the year. One of these things is a lie. Both of them are true.
The R Street Institute’s Mark Dalton identified the core contradiction: the Pentagon considered Anthropic’s technology so vital to national defense that it threatened to invoke the Defense Production Act to retain access — and then, days later, designated that same company a supply chain risk. Dalton warned that the next time this designation is applied to a company with actual ties to a foreign adversary, its credibility will be diminished.
He is right, and he understates the problem. The credibility is gone entirely. The supply chain risk designation has been revealed as what it always was in this context: a political weapon dressed up as a security assessment.
Saturday and Sunday: The People Voted
While the government was banning Anthropic and using its technology to fight a war, something remarkable happened in the consumer market. Claude hit number one on Apple’s U.S. App Store, overtaking ChatGPT for the first time. A “Cancel ChatGPT” movement spread across Reddit and X, with users posting guides for deleting their accounts and migrating to Claude. Anthropic reported that its free user base had grown more than 60% since January, with daily sign-ups quadrupling.
The American public did in forty-eight hours what Congress has failed to do in a decade: it rendered a verdict on the ethics of AI deployment. With downloads. With app deletions. With the only ballot available.
This is both encouraging and damning. Encouraging because it suggests that when the stakes are made clear, people will act on their values. Damning because the fact that consumer behavior is the only functioning feedback mechanism in the most consequential technology debate of our time is itself a symptom of institutional failure.
Congress still has not held a hearing. No legislation has been introduced. The only democratic accountability in this entire saga has come from people tapping an icon on their phones.
Friday Through Sunday: The Employees
There is one more thread, and it may be the most revealing.
Before the deadline passed on Friday, more than 300 Google employees and over 60 OpenAI employees signed an open letter urging their companies to stand with Anthropic. The letter warned that the Pentagon was trying to divide the industry: “They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”
Google DeepMind’s Chief Scientist, Jeff Dean, publicly endorsed the letter’s position, writing that mass surveillance violates the Fourth Amendment. More than 100 DeepMind employees sent a separate internal letter to leadership demanding restrictions on military applications.
And then Sam Altman signed the Pentagon deal anyway.
The fracture is stunning. OpenAI employees signed a letter saying the Pentagon’s demands were unacceptable. Hours later, their CEO accepted those demands. The Inverse Problem has moved inside the walls of the corporation itself — between the people who build the technology and the people who decide what to do with it.
What Seventy-Two Hours Proved
When I wrote the first essay, the Inverse Problem was a framework — a way of understanding a structural shift in which the expected roles of power and restraint had reversed. Three days later, it has become an empirical observation, confirmed by events:
A government banned a technology it cannot stop using. A company walked away from hundreds of millions of dollars rather than compromise on two principles almost every American would agree with. A competitor claimed identical principles while accepting contract language that may render those principles meaningless. Workers at that competitor’s company publicly opposed what their leadership did in their name. And the public, lacking any institutional mechanism for input, voted with the only ballot available to them — an app download.
The Inverse Problem is a condition. And seventy-two hours proved that the condition is deepening.
The Question That Now Has a Price Tag
On the same day Anthropic was blacklisted and OpenAI took the Pentagon deal, OpenAI announced a $110 billion funding round — $50 billion from Amazon, $30 billion from Nvidia, $30 billion from SoftBank — at a $730 billion pre-money valuation.
I am noting a coincidence that the market will inevitably interpret. One company said no to power and lost its government business. The other said yes and secured the largest private funding round in technology history, on the same day, from the companies that will build the physical infrastructure of AI for the next decade.
The Inverse Problem now has a price tag. Anthropic’s two red lines — a human in the loop before anyone dies, and no mass AI surveillance of American citizens — cost the company $200 million in government contracts and potentially its entire federal business ecosystem. OpenAI’s willingness to accept “all lawful purposes” language coincided with an infusion of more capital than the GDP of most nations.
The market is watching. Every AI company in the world is watching. And what they are learning is this: safety has a cost, and compliance has a reward, and the distance between them is measured in hundreds of billions of dollars.
What the Mirror Shows Now
I said in the first essay that the Inverse Problem is a mirror. What I see reflected now, seventy-two hours later, is worse than what I saw before.
I see a government that will use the word “risk” as a weapon against a company for exercising the same judgment the government relied on the next morning. I see a deal dressed in the language of safety that may contain no safety at all. I see workers at one of the world’s most powerful companies publicly opposing a decision their leadership made without them. I see a public that cares deeply and has almost no institutional channel through which to express it. And I see a technology — one that helped plan airstrikes on Tehran — that is too important for anyone to control and too dangerous for no one to try.
The Inverse Problem has become as real as it gets. It has a body count, a price tag, and an app store ranking.
And it is only Monday.
What Time Binds is a column about the moments where past assumptions collide with present realities. If you read the first essay, share this one. The people who need to understand the Inverse Problem are the ones who haven’t heard of it yet.
Full disclosure: I use Claude, Google’s Gemini, and NotebookLM in my daily work and am a paying subscriber to both Anthropic’s and Google’s AI products. I have no financial relationship with either company beyond those subscriptions. This essay was written with Claude’s assistance.


