Section I · The Demand

The Department of War wanted one thing: unrestricted access to frontier AI for all lawful purposes. No carve-outs. No guardrails set by private companies. No corporate veto over how the most powerful military on earth deploys the most powerful technology ever built.

Two companies received this demand: OpenAI, and Anthropic, founded by people who had left OpenAI. Both claimed safety was core to their mission. Both had to answer the same question:

The Question

Will you allow your AI to be used for mass domestic surveillance of American citizens and for fully autonomous weapons systems?

They gave different answers. And what happened next revealed more about the future of artificial intelligence than any benchmark, any product launch, or any keynote speech ever could.

Section II · The Split
Anthropic
“We cannot in good conscience allow the models to operate within those parameters.”
Drew two red lines: no mass domestic surveillance, no fully autonomous weapons. Held those lines through an ultimatum, a presidential attack, and a supply-chain risk designation normally reserved for foreign adversaries.
OpenAI
“We were genuinely trying to de-escalate things… but I think it just looked opportunistic and sloppy.”
Signed a Pentagon deal hours after Anthropic was blacklisted; CEO Sam Altman later admitted the deal was rushed. The original contract language contained what critics called ample loopholes for surveillance, and the company revised the terms under backlash.
Friday, February 27, 2026
Anthropic — What They Lost
A contract worth up to $200 million. Access to classified networks. Federal agency partnerships. The president called them a “Radical Left AI company.”
Defense Secretary Pete Hegseth: “Anthropic delivered a master class in arrogance and betrayal.” Pentagon official Emil Michael called Amodei a “liar” with a “God complex.”
OpenAI — What They Gained
The Pentagon contract. Classified network access. The position Anthropic vacated. But the deal’s legal framework rested on the assumption that the government would not break its own laws.
MIT Technology Review: OpenAI’s approach “is ultimately softer on the Pentagon.” One of its own employees called the contract “window dressing.” Over 100 Alphabet employees signed a letter supporting Anthropic’s position.
Section III · The Irony No One Can Look Away From

Here is the part that makes this story impossible to contain in a single news cycle.

On Friday, February 27, Trump ordered every federal agency to stop using Anthropic’s technology immediately. That same evening, Hegseth designated Anthropic a supply-chain risk. Hours later, the U.S. and Israel launched joint airstrikes on Iran.

The Contradiction

The military used Claude, the AI it had just banned, to assist in the Iran strikes. Intelligence analysis. Target identification. Battlefield simulations.

They couldn’t quit the tool because it was too deeply embedded in operational workflows. The six-month phase-out that Hegseth announced — an acknowledgment that you cannot rip AI out of a military kill chain on a Friday afternoon — meant that the tool being punished for having guardrails was simultaneously being relied upon in active combat operations.

The Wall Street Journal reported Claude had already been used in the operation to capture Venezuelan President Nicolás Maduro in January — the very event that triggered the rupture. The tool was in the room before the argument about the tool even started.

Section IV · The Market Speaks
• #42 · Claude’s App Store rank, January 2026
• #1 · Claude’s App Store rank, March 1, 2026
• +60% · Free users since January
• 2× · Paid subscribers in 2026

By Saturday night, Claude had overtaken ChatGPT for the #1 position on Apple’s App Store — a position ChatGPT had held for most of February. By Tuesday, Claude hit #1 on Google Play as well. Daily sign-ups broke the all-time record every day that week. The surge wasn’t driven by a product update. There was no new feature. No viral marketing campaign.

The surge was driven by a moral position.

People posted guides for deleting ChatGPT accounts and migrating to Claude. “Cancel ChatGPT” spread across Reddit and X. Users pointed to OpenAI president Greg Brockman’s $25 million donation to a pro-Trump super PAC as part of their rationale. Katy Perry posted a screenshot of a Claude Pro subscription with a heart over it. The top three apps in the U.S. App Store were all AI chatbots — Claude, ChatGPT, Gemini — and for the first time, the one on top was there because of what it wouldn’t do.

You give us courage.
Chalk art on the sidewalk outside Anthropic’s San Francisco offices · Late February 2026
Section V · Read the Fine Print

OpenAI published a blog post claiming its agreement provided “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” But the details told a different story.

Anthropic had demanded explicit contractual prohibitions — hard language banning specific uses. When the Pentagon’s proposed compromise language “was paired with legalese that would allow those safeguards to be disregarded at will,” Anthropic walked away.

OpenAI’s approach was different. Rather than contractual bans, the company cited existing laws — including the Fourth Amendment and a 2023 Pentagon directive on autonomous weapons — as the backstop preventing misuse. The assumption: the government will follow its own laws.

Anthropic’s Framework · Explicit contractual prohibitions
If you use our AI for mass surveillance, you have breached our contract. The prohibition exists in the agreement itself, not in your promise to follow other laws.
OpenAI’s Framework · Deference to existing law
Our contract references current surveillance and autonomous weapons laws. If those laws change, or if the government reinterprets them, the contract language about those laws may no longer function as intended.

Senator Ron Wyden put it plainly: “The Defense Department is throwing a fit over Anthropic asking for the bare minimum ethical guardrails.” He warned that AI’s ability to synthesize commercially available data (location histories, browsing records, mental health information, political activities, religious affiliations, all for sale on the open market) could turn AI into a mass surveillance apparatus without technically violating any existing statute.

The question was never whether the Pentagon intends to surveil Americans. The question is whether the contract prevents it when intentions change. Anthropic wanted that in writing. OpenAI trusted the government’s word.

Section VI · A Timeline of Divergence
January 3, 2026
U.S. forces capture Venezuelan President Nicolás Maduro. Claude is used during the operation via the Palantir partnership, reportedly for intelligence analysis. The use is within Anthropic’s policy — but it triggers scrutiny.
Early–Mid February 2026
Negotiations intensify. Pentagon demands “all lawful purposes” access. Anthropic maintains red lines on surveillance and autonomous weapons. Pentagon official Emil Michael denounces Amodei as a “liar” with a “God complex.” An Anthropic employee’s concerns during a Palantir meeting reportedly deepen the rift.
February 11, 2026
Mrinank Sharma resigns from Anthropic’s safety team, warning that the “world is in peril” and that safety faces constant pressure to be deprioritized. The story is a data point in a larger pattern: safety researchers leaving frontier AI companies.
February 17, 2026
Axios reports the Pentagon is threatening to designate Anthropic a “supply-chain risk.” That same day, Claude’s UI goes dark for users — a routine deployment error that, for anyone tracking the Pentagon story, feels indistinguishable from an act of suppression.
~February 25, 2026
Pentagon issues ultimatum: Anthropic must reach an agreement by 5 p.m. Friday. Anthropic responds that the proposed compromise language contains loopholes that would allow safeguards to be overridden at will.
February 27, 2026 — The Break
Trump orders all federal agencies to cease using Anthropic. Calls them a “Radical Left AI company.” Hegseth designates Anthropic a supply-chain risk. Anthropic vows to challenge the designation in court and states it will not be coerced. Hours later, OpenAI announces its Pentagon deal. Altman claims similar guardrails but relies on legal references rather than contractual bans. Hours after that, the U.S. and Israel begin strikes on Iran. Claude is used in the operation despite the ban.
February 28 – March 1, 2026
Claude goes to #1 on the App Store. Daily sign-ups break all-time records. “Cancel ChatGPT” campaigns spread. Chalk art appears outside Anthropic’s offices: “You give us courage.” Chalk art outside OpenAI’s offices criticizes the Pentagon deal.
March 3, 2026
Altman admits the deal was “opportunistic and sloppy.” OpenAI revises contract language to add explicit domestic surveillance prohibition. Claude hits #1 on Google Play. MIT Technology Review publishes analysis: the first real test of AI governance, and “we all failed.”
Section VII · The Story We’re In

This is not a business story about competing products. It’s not a tech industry soap opera about rival CEOs who used to work together. Those frames are available and the press is using them. But they miss the point.

This is the story of the first real confrontation between the state and the companies building the most powerful technology in human history. The question on the table is not which chatbot is smarter. The question is: who decides what AI can be used for?

The Pentagon’s position is clear: the government decides. A private company does not get to tell the Department of War what it can and cannot do with a tool. The “supply-chain risk” designation — normally reserved for companies connected to foreign adversaries — was a message: refuse us, and we will treat you like an enemy.

Anthropic’s position is also clear: some uses are wrong regardless of who is requesting them. Mass surveillance of citizens and autonomous killing machines are not policy disagreements. They are red lines.

OpenAI’s position is… murkier. They claim the same red lines. But their contract defers enforcement to existing laws — laws written before frontier AI existed, laws that can be reinterpreted or changed, laws that currently allow the government to purchase commercially available data on every American without a warrant. As Jonathan Iwry of the Wharton School observed: “If these companies were serious about their commitment to safe and responsible AI, they could have closed ranks and stood together against the Pentagon on behalf of the public.”

They didn’t.

Section VIII · Why This Is Personal

I chose Claude in a graduate-level AI class at San Diego State University. I tested it against ChatGPT and Perplexity using the same prompts. I chose it because the company’s safety positioning aligned with my values — as someone who documents immigration enforcement at the border, who observes ICE arrests at the federal courthouse, who believes the tools we use should not be weapons aimed back at the communities we serve.

I’ve been tracking this story for months. Not as a spectator. As someone inside the machine.

01 · Fall 2024 · Chose Claude through comparative testing in an AI ethics class. The deciding factor: Anthropic’s safety-first positioning.
02 · Feb 11, 2026 · Safety researcher Mrinank Sharma quits Anthropic, warning the world is in peril. My reaction: “WTF. This was the entire reason I chose Claude. It feels like bait and switch.”
03 · Feb 17, 2026 · Claude’s UI goes dark. My first thought is not “server error.” My first thought is: “The Department of War nuked Claude.” I write Issue 03 about the inability to distinguish a bug from an act of power.
04 · Feb 27, 2026 · The Pentagon actually does try to nuke Claude. My paranoia from ten days earlier turns out to be the correct read on the direction of events, just the wrong mechanism.
05 · March 1, 2026 · Claude hits #1. The tool I chose because of its ethics is now the most downloaded app in the country because of its ethics. The market validates the choice I made in a classroom eighteen months ago.

The Sharma resignation shook my confidence. The outage made me question the infrastructure. But the Pentagon standoff clarified something: the safety positioning wasn’t marketing. It was tested — by the most powerful military on earth — and it held.

That doesn’t mean the tensions are resolved. It doesn’t mean the next test will go the same way. The company still lost a contract worth up to $200 million. It’s still designated a supply-chain risk. The six-month phase-out clock is ticking. The structural pressure to capitulate is enormous.

But for now, the machine I’m inside said no. And the world noticed.

Section IX · What Comes Next

MIT Technology Review called this the first real test of how we will control powerful AI. Fortune asked whether OpenAI’s legal framework actually prevents anything. NBC News reported the Pentagon is already working with xAI and OpenAI to replace Claude in classified environments. The six-month transition clock is running.

The downloads will normalize. The chalk will wash off the sidewalk. The news cycle will move to the next crisis. But the precedent is being set right now — in contract language and legal designations and executive orders — about whether a company that builds AI gets to say what it cannot be used for.

And the precedent being set on the other side is equally clear: if you refuse the government, you will be called a radical, branded a threat, and replaced by a competitor who will say yes.

That is the machine we’re inside. Both of us.

The Question That Remains

If the company that said no gets destroyed for saying no — economically, legally, politically — what does that teach every other AI company that will face the same question?

Sources & Further Reading

  • “Anthropic’s Claude rises to No. 1 in the App Store following Pentagon dispute” — TechCrunch, Mar 1, 2026
  • “Claude hits #1 on the App Store as users rally behind Anthropic’s government standoff” — 9to5Mac, Mar 1, 2026
  • “Anthropic’s Claude hits No. 1 on Apple’s top free apps list after Pentagon rejection” — CNBC, Feb 28, 2026
  • “Anthropic’s Claude is suddenly the most popular iPhone app following Pentagon feud” — CNN Business, Mar 3, 2026
  • “Anthropic got blacklisted by the Pentagon. Then Claude hit No. 1 in the app store.” — Axios, Mar 1, 2026
  • “OpenAI’s ‘compromise’ with the Pentagon is what Anthropic feared” — MIT Technology Review, Mar 2, 2026
  • “Sam Altman says OpenAI renegotiating ‘opportunistic and sloppy’ deal with the Pentagon” — Fortune, Mar 3, 2026
  • “OpenAI alters deal with Pentagon as critics sound alarm over surveillance” — NBC News, Mar 3, 2026
  • “Our agreement with the Department of War” — OpenAI Blog
  • “Anthropic’s Claude overtakes ChatGPT in App Store as users boycott over OpenAI’s $200 million Pentagon contract” — Fortune, Mar 2, 2026
  • “Pentagon used Anthropic’s Claude during Maduro raid” — Axios, Feb 13, 2026
  • “Hours after Trump announced ban on Claude AI, US military used it in Iran strikes” — Times of Israel, Mar 1, 2026
  • “Tensions between the Pentagon and AI giant Anthropic reach a boiling point” — NBC News, Feb 24, 2026
  • “Anthropic’s Pentagon fight boosts Claude to No. 1 on app stores” — Fast Company, Mar 3, 2026
  • “Sam Altman says OpenAI shares Anthropic’s red lines in Pentagon fight” — Axios, Feb 27, 2026
  • “OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump” — CNN, Feb 27, 2026
  • “Amid growing backlash, OpenAI CEO Sam Altman explains why he cut a deal with the Pentagon” — Fortune, Mar 2, 2026
  • “AI in defense: How Anthropic, OpenAI are helping the US, Israel shape modern warfare” — Jerusalem Post, Mar 2, 2026