The Inverse Problem: When a For-Profit Company Becomes America's Last Guardrail Against Killer Robots
The story of a corporation that told the U.S. military "no" — and what it costs when the roles of regulator and regulated reverse — A What Time Binds Essay
Something broke in the American experiment this week, and almost nobody has the framework to understand it. So let me give you one.
In political science, there’s a concept you could call the Inverse Problem — when the actors you expect to play one role swap places entirely with the actors you expect to play the other. Regulators become deregulators. The regulated become the regulators. The machine designed to protect you becomes the thing you need protection from, and the machine designed to extract profit from you becomes the thing standing between you and catastrophe.
That is exactly what happened on February 27, 2026, when the President of the United States threatened a private AI company with “the full power of the presidency” — including criminal consequences — because it refused to build autonomous killing machines and mass surveillance tools for the Pentagon.
The company is Anthropic, maker of the Claude AI system. And here is the detail that should stop you cold: Anthropic is a for-profit company.
The Facts, Because They Matter
Let’s establish what actually happened before we talk about what it means.
Anthropic signed a contract worth up to $200 million with the Pentagon last summer. Claude became the first AI model to operate on the military’s classified networks, deployed through a partnership with Palantir Technologies. By all accounts, the arrangement was working. Claude was helping warfighters with logistics, planning, analysis — the kinds of applications that make modern defense function.
Then, in January, reports emerged that Claude had been used during the U.S. military operation that captured Venezuelan president Nicolás Maduro in Caracas. Anthropic couldn’t confirm or deny the specifics — classified operations are classified — but the company raised concerns that Claude’s use may have crossed lines that its terms of service explicitly prohibit.
That triggered a chain reaction. Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to a meeting. According to multiple reports, military officials warned they could invoke the Defense Production Act — a Cold War-era law — to seize broader authority over the technology. Hegseth gave Anthropic until 5 PM Friday to remove all restrictions on military use of Claude.
Anthropic’s response came in a public statement: they cannot permit Claude to be used for two specific purposes. Just two. Fully autonomous weapons — weapons that kill without a human being in the decision chain. And mass domestic surveillance — the wholesale collection and AI-processed analysis of Americans’ personal data.
That’s it. That was the line. Use our AI for anything else. Plan operations. Analyze intelligence. Optimize logistics. Build better systems. Just don’t build robots that kill people on their own, and don’t spy on the entire American population with it.
The Pentagon’s response? Hegseth designated Anthropic a “supply chain risk.” President Trump announced a six-month phaseout of all Anthropic products across the federal government and posted on Truth Social, calling the company “left-wing nut jobs” who had made “a disastrous mistake trying to strong arm the Department of War.”
The Pentagon’s Undersecretary for Research and Engineering, Emil Michael, called Amodei “a liar” with “a God-complex” who “wants nothing more than to try to personally control the US Military.”
For saying no to killer robots and mass surveillance.
The Inverse Problem
Here is where you need to pay attention, because this is the moment the world turned upside down.
For the past decade, the dominant anxiety about artificial intelligence has gone something like this: powerful technology companies would build increasingly dangerous AI systems, and governments — slow, bureaucratic, and ultimately accountable to voters — would need to step in and regulate them. The corporation would push for maximum capability. The government would push for maximum safety. That is the normal polarity of the relationship between power and restraint in a democracy.
What happened this week is the precise inversion of that dynamic.
A for-profit corporation — one that stood to make $200 million from this single contract alone — voluntarily limited its own product. It drew lines around what its technology should not do. No law required it. No regulator ordered it. The people who built the technology simply understood, better than anyone in government, what it was capable of becoming.
And the United States government, the entity constitutionally charged with providing for the common defense while securing the blessings of liberty, demanded that those safety limits be removed. When the company refused, the government moved to destroy it commercially.
Read that again if you need to.
The corporation said: We believe a human being should be in the chain of command before anyone is killed.
The government said: Remove that restriction, or we will end you.
Why “For-Profit” Is the Detail That Changes Everything
Anthropic competes in the open market. It raises venture capital. It answers to investors. Specifically, Anthropic is a Public Benefit Corporation, a for-profit entity that has raised billions from backers like Google and Amazon, investors who expect returns.
Its corporate structure includes something called a Long-Term Benefit Trust, an independent body that can select and remove board members to ensure the company stays aligned with its mission. It was deliberately designed so that safety considerations couldn’t be easily overridden by profit motives. And still — Anthropic has revenue targets, employees with stock options, and competitors like OpenAI, Google, and Meta breathing down its neck.
And it just walked away from a $200 million government contract — and potentially its entire federal business — because it wouldn’t agree to let its technology kill people without human oversight.
If Anthropic were a nonprofit, this would be admirable. The fact that it is a for-profit company makes it extraordinary. A company that exists to generate shareholder value decided that some values cannot be sold. That there is a price too high, even when you’re in the business of being paid.
When was the last time you saw that happen in America?
The Silence of Congress
Ben Rhodes, who served as deputy national security advisor under President Obama, put it plainly in a recent interview: in any normal era, this is exactly the kind of issue that would produce legislation. Congress would hold hearings. International negotiations would establish norms. Treaties would be drafted — the way we drafted the Chemical Weapons Convention, the way we built the Nuclear Non-Proliferation Treaty.
Instead, Congress has done nothing. The international community is fractured. And the only regulation that exists on the most powerful technology humanity has ever built is the terms of service written by the companies that built it.
Think about that for a moment. The only thing standing between the United States military and fully autonomous AI weapons systems is the corporate policy document of a single company. No law. No treaty. No constitutional provision. A terms of service agreement from a company in San Francisco.
And the government just tried to rip it up.
The Domino That Didn’t Fall — Yet
Here is the part of the story that offers a thin, complicated kind of hope.
Within hours of Trump’s announcement, OpenAI CEO Sam Altman sent a memo to his employees declaring that OpenAI shares Anthropic’s red lines. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons,” Altman wrote, “and that humans should remain in the loop for high-stakes automated decisions.”
This is significant. OpenAI and Anthropic are fierce competitors. Altman and Amodei have deep personal and professional tensions that trace back to Amodei’s departure from OpenAI in 2021. For Altman to publicly align with Anthropic on this issue amounts to an entire industry drawing a line in the sand.
The solidarity is also fragile. The Pentagon has reportedly already approved OpenAI's proposed safety red lines in a new arrangement. The government's position may already be softening — or it may simply be shopping for a more compliant partner. The dynamics are shifting by the hour.
What the Inverse Problem Tells Us About Where We Are
The Inverse Problem extends well beyond this particular news cycle. It is a structural feature of the moment we are living through.
When the institutions designed to protect the public interest instead pursue unchecked power, and the institutions designed to pursue profit instead protect the public interest, you are not living in a system that is functioning correctly. You are living in a system where the wiring has been reversed.
I want to be precise about what I’m saying here: neither actor is playing the role it was designed for. We need democratic governments that regulate powerful technology in the public interest. We need corporations that innovate within boundaries set by democratic accountability. What we have instead is a government demanding the right to build autonomous killing machines and a corporation begging it to stop.
Experts at Lawfare note that using the Defense Production Act to force Anthropic to strip its safety guardrails would be “without precedent.” The DPA allows the government to prioritize its contracts — to jump to the front of the line. It has never been used to force a company to change the fundamental nature of its product. Forcing Anthropic to remove its ethical restrictions would raise First Amendment questions: the government would be compelling a company to express values it rejects.
Anthropic has said it will challenge the supply chain risk designation in court, calling it “legally unsound” and warning it would “set a dangerous precedent for any American company that negotiates with the government.”
We are in genuinely uncharted territory, both technologically and constitutionally.
The Question This Moment Is Asking You
Here is what I keep coming back to.
The people who built Claude — the engineers, the researchers, the safety teams at Anthropic — know more about what this technology can do than anyone in the Pentagon. They know more about it than Pete Hegseth. They know more about it than Donald Trump. They have spent years studying the failure modes, the edge cases, the scenarios where AI systems behave in ways their creators did not intend.
And those people are scared enough that they built restrictions into their own product, at the cost of their own revenue, knowing it might cost them everything.
When the people who understand the technology best are the ones most afraid of what it can do, and the people who understand it least are the ones most eager to unleash it, that should tell you something. That should tell you everything.
Anthropic’s two red lines are not radical positions. A human should decide before a machine kills someone. The government should not use AI to conduct mass surveillance on its own citizens. These are ideas that, stated plainly, almost every American would agree with. They are the bare minimum of civilized restraint.
And yet here we are, watching a for-profit company fight the federal government to preserve them.
The Inverse Problem is a mirror. And what it's reflecting back at us right now is a country where the last line of defense for democratic values might be a terms of service agreement, written by a company that technically exists to make money.
If that doesn’t keep you up at night, you’re not paying attention.
What Time Binds is a column about the moments where past assumptions collide with present realities. If this piece resonated, share it. The Inverse Problem only gets worse when people aren't watching.