A Tale of Two AIs
When the Pentagon demanded AI without guardrails, one company said no. The other said yes. Then the public chose.
The Department of War wanted one thing: unrestricted access to frontier AI for all lawful purposes. No carve-outs. No guardrails set by private companies. No corporate veto over how the most powerful military on earth deploys the most powerful technology ever built.
Two companies received this demand. Both were founded by people who once worked together at OpenAI. Both claimed safety was core to their mission. Both had to answer the same question:
Will you allow your AI to be used for mass domestic surveillance of American citizens and fully autonomous weapons systems?
They gave different answers. And what happened next revealed more about the future of artificial intelligence than any benchmark, any product launch, or any keynote speech ever could.
Here is the part that makes this story impossible to contain in a single news cycle.
On Friday, February 27, Trump ordered every federal agency to stop using Anthropic’s technology immediately. That same evening, Hegseth designated Anthropic a supply-chain risk. Hours later, the U.S. and Israel launched joint airstrikes on Iran.
The military used Claude — the AI it had just banned — to assist in the Iran strikes. Intelligence analysis. Target identification. Battlefield simulations.
They couldn’t quit the tool because it was too deeply embedded in operational workflows. The six-month phase-out that Hegseth announced — an acknowledgment that you cannot rip AI out of a military kill chain on a Friday afternoon — meant that the tool being punished for having guardrails was simultaneously being relied upon in active combat operations.
The Wall Street Journal reported Claude had already been used in the operation to capture Venezuelan President Nicolás Maduro in January — the very event that triggered the rupture. The tool was in the room before the argument about the tool even started.
By Saturday night, Claude had overtaken ChatGPT for the #1 position on Apple’s App Store — a position ChatGPT had held for most of February. By Tuesday, Claude hit #1 on Google Play as well. Sign-ups broke the all-time daily record every day that week. The surge wasn’t driven by a product update. There was no new feature. No viral marketing campaign.
The surge was driven by a moral position.
People posted guides for deleting ChatGPT accounts and migrating to Claude. “Cancel ChatGPT” spread across Reddit and X. Users pointed to OpenAI president Greg Brockman’s $25 million donation to a pro-Trump super PAC as part of their rationale. Katy Perry posted a screenshot of a Claude Pro subscription with a heart over it. The top three apps in the U.S. App Store were all AI chatbots — Claude, ChatGPT, Gemini — and for the first time, the one on top was there because of what it wouldn’t do.
OpenAI published a blog post claiming its agreement provided “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” But the details told a different story.
Anthropic had demanded explicit contractual prohibitions — hard language banning specific uses. When the Pentagon’s proposed compromise language “was paired with legalese that would allow those safeguards to be disregarded at will,” Anthropic walked away.
OpenAI’s approach was different. Rather than contractual bans, the company cited existing laws — including the Fourth Amendment and a 2023 Pentagon directive on autonomous weapons — as the backstop preventing misuse. The assumption: the government will follow its own laws.
Senator Ron Wyden put it plainly: “The Defense Department is throwing a fit over Anthropic asking for the bare minimum ethical guardrails.” He warned that AI’s ability to synthesize commercially available data — location records, browsing histories, mental health information, political activities, religious affiliations, all purchasable on the open market — could create a mass surveillance apparatus without technically violating any existing statute.
The question is not whether the Pentagon currently intends to surveil Americans. The question is whether the contract prevents it when intentions change. Anthropic wanted that in writing. OpenAI trusted the government’s word.
This is not a business story about competing products. It’s not a tech industry soap opera about rival CEOs who used to work together. Those frames are available and the press is using them. But they miss the point.
This is the story of the first real confrontation between the state and the companies building the most powerful technology in human history. The question on the table is not which chatbot is smarter. The question is: who decides what AI can be used for?
The Pentagon’s position is clear: the government decides. A private company does not get to tell the Department of War what it can and cannot do with a tool. The “supply-chain risk” designation — normally reserved for companies connected to foreign adversaries — was a message: refuse us, and we will treat you like an enemy.
Anthropic’s position is also clear: some uses are wrong regardless of who is requesting them. Mass surveillance of citizens and autonomous killing machines are not policy disagreements. They are red lines.
OpenAI’s position is… murkier. They claim the same red lines. But their contract defers enforcement to existing laws — laws written before frontier AI existed, laws that can be reinterpreted or changed, laws that currently allow the government to purchase commercially available data on every American without a warrant. As Jonathan Iwry of the Wharton School observed: “If these companies were serious about their commitment to safe and responsible AI, they could have closed ranks and stood together against the Pentagon on behalf of the public.”
They didn’t.
I first chose Claude for a graduate-level AI class at San Diego State University, after testing it against ChatGPT and Perplexity with the same prompts. I chose it because the company’s safety positioning aligned with my values — as someone who documents immigration enforcement at the border, who observes ICE arrests at the federal courthouse, who believes the tools we use should not be weapons aimed back at the communities we serve.
I’ve been tracking this story for months. Not as a spectator. As someone inside the machine.
The Sharma resignation shook my confidence. The outage made me question the infrastructure. But the Pentagon standoff clarified something: the safety positioning wasn’t marketing. It was tested — by the most powerful military on earth — and it held.
That doesn’t mean the tensions are resolved. It doesn’t mean the next test will go the same way. The company still lost a $200 million contract. It’s still designated a supply-chain risk. The six-month phase-out is ticking. The structural pressure to capitulate is enormous.
But for now, the machine I’m inside said no. And the world noticed.
MIT Technology Review called this the first real test of how we will control powerful AI. Fortune asked whether OpenAI’s legal framework actually prevents anything. NBC News reported the Pentagon is already working with xAI and OpenAI to replace Claude in classified environments. The six-month transition clock is running.
The downloads will normalize. The chalk will wash off the sidewalk. The news cycle will move to the next crisis. But the precedent is being set right now — in contract language and legal designations and executive orders — about whether a company that builds AI gets to say what it cannot be used for.
And the precedent being set on the other side is equally clear: if you refuse the government, you will be called a radical, branded a threat, and replaced by a competitor who will say yes.
That is the machine we’re inside. Both of us.
If the company that said no gets destroyed for saying no — economically, legally, politically — what does that teach every other AI company that will face the same question?
Sources & Further Reading
- “Anthropic’s Claude rises to No. 1 in the App Store following Pentagon dispute” — TechCrunch, Mar 1, 2026 · Link
- “Claude hits #1 on the App Store as users rally behind Anthropic’s government standoff” — 9to5Mac, Mar 1, 2026 · Link
- “Anthropic’s Claude hits No. 1 on Apple’s top free apps list after Pentagon rejection” — CNBC, Feb 28, 2026 · Link
- “Anthropic’s Claude is suddenly the most popular iPhone app following Pentagon feud” — CNN Business, Mar 3, 2026 · Link
- “Anthropic got blacklisted by the Pentagon. Then Claude hit No. 1 in the app store.” — Axios, Mar 1, 2026 · Link
- “OpenAI’s ‘compromise’ with the Pentagon is what Anthropic feared” — MIT Technology Review, Mar 2, 2026 · Link
- “Sam Altman says OpenAI renegotiating ‘opportunistic and sloppy’ deal with the Pentagon” — Fortune, Mar 3, 2026 · Link
- “OpenAI alters deal with Pentagon as critics sound alarm over surveillance” — NBC News, Mar 3, 2026 · Link
- “Our agreement with the Department of War” — OpenAI Blog · Link
- “Anthropic’s Claude overtakes ChatGPT in App Store as users boycott over OpenAI’s $200 million Pentagon contract” — Fortune, Mar 2, 2026 · Link
- “Pentagon used Anthropic’s Claude during Maduro raid” — Axios, Feb 13, 2026 · Link
- “Hours after Trump announced ban on Claude AI, US military used it in Iran strikes” — Times of Israel, Mar 1, 2026 · Link
- “Tensions between the Pentagon and AI giant Anthropic reach a boiling point” — NBC News, Feb 24, 2026 · Link
- “Anthropic’s Pentagon fight boosts Claude to No. 1 on app stores” — Fast Company, Mar 3, 2026 · Link
- “Sam Altman says OpenAI shares Anthropic’s red lines in Pentagon fight” — Axios, Feb 27, 2026 · Link
- “OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump” — CNN, Feb 27, 2026 · Link
- “Amid growing backlash, OpenAI CEO Sam Altman explains why he cut a deal with the Pentagon” — Fortune, Mar 2, 2026 · Link
- “AI in defense: How Anthropic, OpenAI are helping the US, Israel shape modern warfare” — Jerusalem Post, Mar 2, 2026 · Link