Pinch to Zoom
Anthropic and Abstraction
I spent last week trying not to fall down a mountain in Myoko Kogen, Japan. Between runs, I had headphones playing three podcast interviews on repeat: Ben Thompson on Stratechery interviewing Gregory Allen, Thompson on the a16z show, and Ezra Klein talking to Dean Ball. All three circling the same collision — Anthropic vs. the Department of War over what Claude can do, what it should do, and who gets to decide.
By the time I limped back to the lodge in the evening, I’d noticed something odd. The sharpest structural thinker in tech, the guy who built his entire career on telling everyone else they’re operating at the wrong abstraction level, had somehow gotten the most important story of the year mostly wrong. And he did it precisely because he was operating at the wrong abstraction level.
The irony was pretty good.
Here’s the thing: I think Thompson is right that this is fundamentally about power. Right that AI might become a rival power base. Right that people with guns will notice. Almost nobody else was saying it, and that matters. But he reached for the grand framework before he’d understood what actually happened on the ground. And when you slow down and look at the specifics — the actual timeline, the actual terms, the actual gap between what Thompson’s arguing for and what would happen if you tried it — the whole structure sort of collapses.
This matters because Thompson’s version is the best version of the case. Which means if it falls apart on the details, the cruder versions — “Anthropic is woke,” “Dario has a god complex” — don’t have a chance.
Okay, but what actually happened?
The narrative everyone’s running with is “Anthropic defied the Pentagon.” That’s not wrong, exactly, but it’s wrong in the way that “Hannah Montana caused a MySpace customization boom” is technically true but misses the actual mechanism.
In July 2025, the Trump administration signed a contract with Anthropic. Two usage restrictions: Claude wouldn’t be used for mass surveillance of Americans. Claude wouldn’t be used for autonomous weapons without a human in the loop. Both sides shook hands on it.
Six months later, the DoW asked them to remove those restrictions. Anthropic said no. The DoW said you have until 5:01 PM Friday. Anthropic didn’t capitulate. The administration designated them a supply chain risk — the term they use for foreign adversaries like Huawei — and told all federal agencies to stop using Anthropic’s stuff.
Here’s the surreal part, via Ezra Klein: the military was actively fighting Iran using Claude while designating Claude’s maker a national security threat. Anthropic didn’t start this. The DoW changed its own mind and then punished Anthropic for not changing along with it.
That distinction matters more than it sounds.
The appeal of Thompson’s argument (and why it stops working when you zoom in)
Thompson’s core logic: if AI is as powerful as we say — potentially nuclear-weapon level — then private companies controlling it become intolerable to governments. Governments have the guns. They’ll assert control. Anthropic’s leadership means well but doesn’t understand this reality.
He pulls the Oppenheimer analogy (Truman decided on the bomb, not the scientists) and the COVID one (experts shouldn’t make political tradeoffs that belong to elected leaders). And he raises a genuinely spiky point about China and Taiwan: if the U.S. gets an asymmetric AI advantage, China’s optimal game-theory move might be to destroy TSMC, a consequence Anthropic’s pro-chip-sanctions position doesn’t account for.
This is real thinking. The insight about power dynamics is solid. I actually agree on Taiwan — maintained interdependence is probably safer than a clean break. But then you zoom in and every supporting pillar has a crack in it.
The COVID analogy breaks down immediately
Thompson says experts shouldn’t make political tradeoffs — that’s for elected leaders. He’s right about COVID, where epidemiologists basically made economic and social policy by proxy. But that’s not what’s happening here, and the difference is everything.
On autonomous weapons, Anthropic’s position is a straightforward engineering spec: our system isn’t reliable enough to make lethal decisions without a human reviewing them first. This is not a policy preference. This is a nuclear reactor manufacturer writing into a contract “this reactor is not certified for autonomous operation.” Nobody accuses manufacturers of overriding democratic authority. Nobody pretends it’s anything but professional judgment about what the product can and can’t do.
Gregory Allen, during his interview with Thompson, confirmed it: Anthropic’s saying the tech isn’t mature enough for deployment yet, not that it shouldn’t be developed. That’s not an expert wandering into political territory. That’s engineers telling you what they built.
The specifics back this up. This week the Wall Street Journal reported that AI is helping U.S. forces process 3,000+ targets per week in Iran. Humans are still deciding. And in that same campaign, U.S. forces probably hit a girls’ elementary school and killed dozens of children. That’s the system with human oversight. The case against removing human oversight doesn’t need a philosophical framework. It needs a newspaper.
On mass surveillance, it’s more subtle. Claude can process massive datasets — unlike autonomous weapons, this isn’t a capability limitation. But Anthropic isn’t imposing a novel policy. They’re declining to help the government exploit loopholes in surveillance laws written for a world with human-scale friction — laws AI now renders meaningless. That’s not overriding democracy. It’s refusing to help circumvent protections before democracy has had a chance to update them.
Compare this to what OpenAI agreed to: the hope that a few engineers in a classified facility will blow the whistle if things get bad. Put another way: your freedoms ride on someone else being willing to torch their career. More on that rabbit hole below.
Thompson is zoomed out too far — and his own framework proves it
His strongest point: if we let private companies impose military policy constraints, what happens when a less sympathetic company imposes less sympathetic restrictions? It’s legitimate. It’s also exactly why you can’t abstract this yet.
We’re so early in understanding AI + military + democracy that trying to derive general principles top-down leads you to conclusions that evaporate the moment they hit reality. The Oppenheimer analogy. The COVID parallel. We already watched both crack on contact with the specifics. Each specific confrontation generates information you cannot possibly derive in advance by pure reasoning. Thompson’s trying to jump straight to the answer. But the messy process of working through the specifics is the answer, at least for now.
This is why Allen’s Cuban Missile Crisis analogy hits. He suggested: if you ran this AI-government relationship through a thousand simulations, the scenarios that end well probably all have a terrifying close call early on that forces real conversations. Whether Dario planned this or just held firm and the consequences followed naturally, it functions the same way — a controlled dose of power collision while the stakes are $200M and two narrow terms, not an existential crisis over AGI.
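Allen’s framing is, at bottom, a conditioning argument, and a toy simulation makes it concrete. To be clear, this is a purely illustrative sketch: every probability below is invented for the example, not anything Allen estimated.

```python
import random

# Toy Monte Carlo version of Allen's "thousand simulations" framing.
# All parameters are invented for illustration.
random.seed(42)

N = 1000
good_endings = 0
good_with_close_call = 0

for _ in range(N):
    # Some runs get a scary-but-survivable confrontation early on.
    early_close_call = random.random() < 0.30

    # Assumption: a close call forces real conversations, so the
    # relationship is far more likely to end well in the long run.
    p_ends_well = 0.80 if early_close_call else 0.15

    if random.random() < p_ends_well:
        good_endings += 1
        if early_close_call:
            good_with_close_call += 1

print(f"{good_endings} of {N} runs ended well; "
      f"{good_with_close_call / good_endings:.0%} of those "
      f"had an early close call")
```

With these made-up numbers only 30% of runs get a close call, yet roughly 70% of the runs that end well contain one. Condition on success and the scare starts to look almost mandatory in hindsight, which is Allen’s point.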
The thing Thompson misses: confrontation is democratic
Here’s where his analysis most misses the mark. Thompson keeps saying he wants Congress to legislate on surveillance. Everyone does! But consider the counterfactual: if Anthropic had just accepted the DoW’s revised terms, what happens next?
All of this disappears. Thompson’s articles. Klein’s episode. Allen’s analysis. The open letter from hundreds of Google and OpenAI employees. Members of Congress actually engaging. The entire public conversation stays classified and buried.
Thompson invokes democratic legitimacy to argue the government should decide. But democracy doesn’t begin with Congress voting. It begins with an informed electorate. This confrontation works like a vaccine — controlled exposure while stakes are manageable, building immunity before things become existential.
An administration that’s had a shaky relationship with democratic norms (no one needs a reminder of the details) cannot credibly invoke democratic legitimacy as its trump card while simultaneously working to keep the conversation classified.
OpenAI’s deal is a different kind of collapse
The administration settled with OpenAI on “any lawful use” — the formulation Anthropic refused. OpenAI claims the same red lines (no mass surveillance, no autonomous weapons) but different enforcement: forward-deployed engineers with security clearances, classifiers that flag violations, and the right to terminate the contract if those lines are crossed.
It sounds like a compromise. It’s actually just deferred confrontation dressed in bureaucratic language. And it fails from both directions.
From the safety side: if the government wants to run mass surveillance, it doesn’t do it through one big Claude instance. It distributes the work across many instances, so that no single one has enough context to see the pattern. And those forward-deployed engineers? They’d need to blow the whistle on a classified program. End their careers. Face legal liability. Those aren’t guardrails. Those are hopes.
From the Pentagon’s perspective: you just fought a war with Anthropic because you couldn’t live with two stated, known contractual boundaries. Your solution is a different company that retains the right to terminate and pull its engineers if it thinks you’ve crossed a line? How is that more operational certainty rather than less? At least with Anthropic you knew exactly where the boundaries were.
This is the zoom problem in miniature. “The government decides” sounds clean. But zoom to mass surveillance specifically and you’ve got:
1930s phone-tap laws
Fourth Amendment written for human-scale effort
AI that can process every American’s commercially available data at a scale those laws never imagined
A few engineers in a classified facility who’d have to recognize what’s happening and risk their careers to stop it
That’s not an abstract philosophical debate about authority. That’s a specific, testable question: should an AI system do mass pattern-matching on commercially purchased data about American citizens at a scale no law anticipated, in secret, with no public oversight?
I suspect everyone involved — Anthropic, OpenAI, Dean Ball, Ben Thompson — would say no if asked directly. But only Anthropic structured their deal to actually prevent it. “The government decides” sounds clean until you ask “decides what, exactly?” — and the answer turns out to be something almost nobody would defend in the open.
The Musk thing is the thing
Still think this is about neutral principles? Consider the asymmetry.
Elon Musk unilaterally shut off Starlink to a Ukrainian military operation because he decided the risk of Russian nuclear escalation was too high. No consultation with the U.S. government. And it wasn’t a one-off: he avoided activating Starlink over Taiwan at Putin’s request, and he threatened to shut down America’s only astronaut transport during a personal dispute with Trump.
Result: no supply chain risk designation. Ultimately a bigger contract.
Anthropic states two contractual limitations — during a negotiation, in advance, transparently — and gets designated for corporate death.
Same administration. Opposite rules. Context matters.
What Gregory Allen actually said
The smartest voice in all this might be Gregory Allen — a national security insider at CSIS who fully believes the military needs operational control and still thinks the DoW handled this terribly. The gap between the two positions, he said, might not even be that large. Traditional defense contractors impose worse terms without anyone threatening corporate extinction.
But his most important line was about Dario himself. This isn’t a naive idealist. He’s been engaging with national security since before he had money or power. He was showing up to DoD meetings back in his Google Brain days, and he converted an entire generation of safety-focused technologists into enthusiasts for autonomous weapons development — just not deployment without human review. Not yet.
“What a gift,” Allen said. “Don’t take us backwards.”
That might be the most important sentence anyone said all week.
The vaccine we didn’t ask for but apparently needed
I don’t think Anthropic is naive. I don’t think they stumbled into this. Whether or not every move was planned, they chose two narrow, defensible positions — positions the government can’t even publicly argue against — and held firm. The result is a national conversation about AI, surveillance, weapons, and democratic control that wasn’t happening before and couldn’t have happened through quiet lobbying.
Thompson’s right that AI and government power will collide, and we need frameworks for navigating it. Right that the tension will intensify. Right that this one victory doesn’t solve the long-term problem.
But frameworks aren’t derived from first principles at cruising altitude. They’re built through concrete confrontations, public debate, and democratic friction — exactly what happened last week. And they’re built by participants who navigate with transparency and legal restraint, not by those who escalate with tweets and ultimatums.
We got our vaccine. The pathogen is still out there. The real question is whether we’re smart enough to develop immunity before the actual infection arrives.