Hegseth threatens to cancel Anthropic's $200 million contract over "woke AI" concerns
Context:
A Pentagon contract standoff centers on Anthropic’s safety posture and pressure to broaden the allowable uses of its AI. Defense Secretary Pete Hegseth has threatened to terminate Anthropic’s $200 million agreement unless the company relaxes its safety standards, a clash over whether its AI can be used for surveillance or warfare. Anthropic CEO Dario Amodei has argued that domestic surveillance and weaponization cross ethical lines and invite abuse, while administration officials press for any “lawful” use to be permitted. The dispute underscores a broader debate over AI safety versus strategic leverage in defense contracting, with potential escalation through authorities such as the Defense Production Act. The outcome will shape both defense procurement and the boundaries of AI companies’ safety commitments going forward.
Dive Deeper:
Defense Secretary Pete Hegseth warned that Anthropic’s $200 million Defense Department contract could be canceled by Friday unless the company loosens its safety standards, following a meeting with Anthropic CEO Dario Amodei.
Amodei has repeatedly resisted applying AI to domestic mass surveillance and AI-directed weapons, calling such uses illegitimate and prone to abuse, a stance he reiterated in discussions with Hegseth that emphasized maintaining ethical boundaries.
Hegseth reportedly threatened to invoke the Defense Production Act or designate Anthropic a “supply chain risk” if the company does not align with calls for broader lawful-use permissions, a lever typically reserved for critical national security needs.
The term “woke AI” has been used by Hegseth and other Trump administration officials to describe Anthropic’s safety constraints, a label critics call imprecise, arguing it mischaracterizes safety protections as political bias in the models.
White House AI czar David Sacks helped draft an executive order last year targeting tech firms over safety-related concerns, highlighting a political dimension to the dispute and how safety policies are framed at the highest levels.
OpenAI, Google, and Elon Musk’s xAI have publicly embraced broader lawful-use scenarios, contrasting with Anthropic’s cautious stance and illustrating divergent industry approaches amid the administration’s push to loosen safety constraints.
Despite the tension, the Pentagon awarded Anthropic the contract last summer, judging that its model offered advanced capabilities and security suited to sensitive military applications.