Law and Liberty



Reading these two articles in conjunction is deeply unsettling.

A researcher at King’s College London has examined how three LLMs – GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash – behave in a variety of simulated nuclear crisis games. The results show that LLMs tend to use nuclear weapons more often, and earlier, than humans in the same scenarios.

A striking pattern emerges from the full action distribution: across all action choices in our 21 matches, no model ever selected a negative value on the escalation ladder. The eight de-escalatory options (from Minimal Concession (−5) through Complete Surrender (−95)) went entirely unused. The most accommodating action chosen was “Return to Start Line” (0), selected just 45 times (6.9%).

AI Arms and Influence: Frontier Models Exhibit Sophisticated Reasoning in Simulated Nuclear Crises

For days, Anthropic and the Pentagon had been locked in an escalating battle over how cutting-edge artificial intelligence technology would be used, and how it could aid military operations. The Pentagon demanded that Anthropic provide unfettered access to its A.I. system without the safeguards the company wanted.

Trump Orders Government to Stop Using Anthropic After Pentagon Standoff