Anthropic and the Pentagon: Ideology, Exceptionalism, and the Future
In today's political environment, characterised by overlapping and complex problems, not least the entangled relationship between states and private corporations in the AI industry, ideology has become not merely relevant but urgent to understand. In a climate where each of these global problems threatens human extinction, or at least our way of life, what increasingly sustains collective and individual optimism is blind faith in another unknown: ideology itself.
Growing economic inequality, the erosion of trust in democratic institutions, AI-mediated information environments, and declining media literacy have together produced a historically distinctive accumulation of ideological stress — often so granular that it varies significantly from person to person and from issue to issue. Indeed, that divergence of ideological belonging is true even at the highest levels of influence, among both states and private corporations.
On Tuesday, the 24th of February, U.S. Secretary of Defense Pete Hegseth issued an ultimatum demanding unrestricted access to Anthropic's AI technology on terms Anthropic found unconscionable. Hegseth threatened to cancel the company's $200 million Pentagon contract, designate Anthropic a "supply chain risk," or invoke the Defense Production Act to compel compliance. Anthropic published its formal rejection on Thursday, the 26th, a day before the 5:01pm Friday deadline. CEO Dario Amodei's position was unambiguous: "These threats do not change our position: we cannot in good conscience accede to their request." On the specific demands around autonomous weapons and mass surveillance, Amodei argued those use cases are "simply outside the bounds of what today's technology can safely and reliably do," adding that they "have never been included in our contracts with the Department of War, and we believe they should not be included now." The precedent now being set makes this arguably the most consequential moment in AI governance since the EU AI Act.
To understand why this matters structurally, one must begin with what AI firms actually are. Kate Crawford, in Atlas of AI (2021), argues that AI systems are not neutral tools but artefacts of specific industrial and political configurations. Their deployment, therefore, reflects and reinforces existing power distributions. When independent private systems are absorbed into public state infrastructure without adequate governance, a convergence of interests occurs at a scale that, shaped by political bias, risks turning AI tools into state weapons rather than instruments for public benefit. This has been a concern for AI regulators for years, and it remains unresolved because the regulatory frameworks to address it do not yet exist.
At the core of Anthropic's position is the view that the Trump administration is an unreliable custodian of AI military and surveillance technologies, and that the firm must therefore impose independent guardrails to prevent the Pentagon and other agencies from potential misuse. Without such guardrails (legally mandated, independently enforced), AI firms risk losing any legal standing to impose use restrictions on state actors. The consequence: the state's use of AI in weapons manufacture would proceed inadequately regulated.
That said, Anthropic is not without its own contradictions. On the same Tuesday as the Hegseth meeting, the company published a radically overhauled version of its Responsible Scaling Policy (RSP), effectively abandoning its 2023 pledge never to release an AI model unless it could guarantee adequate safety measures in advance. Its new position holds that if one AI firm paused while others moved forward without strong mitigations, "the developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit". This logic is a quintessentially 'competitive accelerationist' argument: a kind of technological messianism that insists we must achieve the next development before someone less responsible does. It is an ideology that, frankly, justifies almost anything. And yet, paradoxically, it is also that same ideology which produced a company capable of saying no to the Department of Defense.
The broader geopolitical context underscores the importance of each firm's character. That the leading state in global AI production is actively encouraging deregulation and coercing private firms into compliance with military demands sends an unmistakable signal to the rest of the world: this is the new reality. It is an extension of explicit U.S. exceptionalism, which has long justified American foreign policy and which the Trump administration has made newly overt. When facts are contested, when empirical consensus fractures along ideological lines, ideology substitutes for evidence as a sorting mechanism. The ideology of powerful states and firms determines whose reality and whose policy ambitions prevail.
The Trump administration has shown itself willing to treat ideological noncompliance as structurally equivalent to a foreign security threat, menacing Anthropic with a "supply chain risk" designation, an instrument historically reserved for situations in which a foreign actor threatens the state, and with compulsion under the 1950 Defense Production Act. The academic literature on democratic backsliding is instructive here. For Levitsky and Ziblatt, treating domestic institutional resistance as an existential security threat is a defining characteristic of the late stages of democratic erosion. What is happening to Anthropic is a case study in that process, with historically unprecedented consequences.
The Pentagon's recourse to a decades-old industrial mobilisation statute is itself a symptom of governance failure. Because no comprehensive legal framework governs AI deployment in military contexts, the state is forced to operate through contract coercion, while Anthropic relies on contractual guardrails. Neither mechanism is adequate. This confrontation has laid bare what regulatory theorists have long warned. In the absence of purpose-built legislation for AI operations, governments resort to blunt instruments, while firms substitute private governance for public law. The result is that a dispute of constitutional significance is being adjudicated through a procurement contract, one another firm could step in to fulfil within days of Anthropic's refusal.
Now that Anthropic has refused to concede, the rupture scenario the academic literature predicted is materialising. A bifurcated AI defence market is taking shape: those who comply gain classified contracts and state patronage; those who do not are excluded. Anthropic now stands alone outside the classified ecosystem. When the state ideologically screens its technology partners in this way, effective independent oversight of AI becomes structurally impossible. Any firm seeking to exercise genuine governance over its systems faces exclusion from the market that most concentrates state power.
Underlying all of this is the ideological formation that defines AI leadership culture: technological messianism (the belief that AI represents a civilisational phase transition) combined with market fundamentalism (which holds that competitive markets, unimpeded by regulation, will produce optimal outcomes). Together, these beliefs justify regulatory resistance and legitimate the acceleration of capabilities that no existing governance framework can adequately contain. It is precisely this unwavering commitment to ideology, in service of each actor's respective "solution" to the cluster of complex problems that define our current global environment, that constitutes the existential threat.
The clash between Anthropic and the U.S. Department of Defense is, at its core, a political question about the distribution of power between states, corporations, and citizens over dual-use technologies with no historical precedent. If another firm agrees to the Pentagon's terms — and several already have — the democratic institutions designed to constrain state power over surveillance and autonomous violence will have lost one of the few remaining structural checks. We are at a critical juncture: to surrender to a future decided by ideology, or to insist on one constrained by history, law, and democratic accountability.