How AI is becoming a core instrument of state power

AI companies often present their technologies as politically neutral. But as frontier models become intertwined with national security strategies, neutrality is giving way to a new reality. AI, warns Elif Davutoğlu, has now become an instrument of state power

For years, leading AI companies cultivated an image of political neutrality. They presented themselves as global research labs, mission-driven innovators, or custodians of a technology meant to benefit humanity. Governments were regulators and occasional partners, not the centre of activity, nor its primary organising force.

That posture is becoming increasingly difficult to maintain.

The deepening cooperation between frontier AI firms and the United States Department of Defense, for example, signals a structural shift. Many governments no longer treat frontier AI primarily as a commercial innovation requiring regulation. Increasingly, they regard it as a strategic capability that needs to be secured.

The Trump administration's revived 'War Department' captures this transformation: not as an institutional renaming, but as a governing logic. The technological frontier is once again being organised around geopolitical competition.

Rethinking AI neutrality

AI neutrality was never purely descriptive. It was also a political position.

Companies such as OpenAI and Anthropic framed their missions in universalist terms. Their rhetoric emphasised global safety, existential risk mitigation, and benefits for humanity at large.

The early AI research environment prized openness and transnational collaboration. This allowed frontier labs to operate as supposedly neutral global actors in a world of sovereign states. It reassured publics that AI development would remain within civilian governance frameworks rather than becoming subsumed under military imperatives.

But neutrality depends on insulation from security logic. That insulation is eroding.

Structural integration, not formal nationalisation

The United States has not nationalised frontier AI firms. Instead, it has reclassified their outputs.

Large language models and advanced generative systems are widely recognised as dual-use technologies. Their potential applications include intelligence analysis, cybersecurity, logistics optimisation, autonomous systems coordination, and strategic simulation. Even when deployed for civilian purposes, their architecture renders them adaptable to defence contexts.

As frontier models are evaluated, tested, or integrated into defence-related environments, their institutional status changes. They cease to be merely commercial tools and become elements of state capability.

The integration is subtle but consequential:

  • Defence innovation units experiment with frontier models.
  • Federal agencies seek early access to advanced systems.
  • Security officials consult AI executives on model risks and strategic competition.
  • Procurement frameworks evolve to accommodate rapidly scaling AI products.

Individually, these developments may not resemble dramatic militarisation. Yet taken together, they gradually embed frontier AI within the national security ecosystem.

Neutrality becomes increasingly difficult to sustain not because companies abandon their mission statements, but because their capabilities become strategically indispensable.

Compute, export controls, and sovereign boundaries

Neutrality is also challenged at the infrastructural level.

Frontier AI development depends on advanced semiconductor supply chains, hyperscale data centres, and vast computational resources. The US government has imposed export controls on high-end chips and related technologies, effectively delineating a geopolitical boundary around AI scaling capacity.

When AI firms align their growth strategies with these controls, they operate within a national strategic perimeter. Access to compute becomes a matter of sovereign concern. Scaling decisions become inseparable from foreign policy.

In this environment, frontier AI is no longer a globally fluid commodity. It is a resource situated within state-defined limits.

The 'War Department' logic operates here not through direct ownership, but through control of the material conditions of possibility.

Safety, security, and convergence

An additional transformation lies in the convergence of safety and security discourse.

Companies such as Anthropic emphasise alignment, constitutional AI, and guardrails against misuse. At the same time, federal institutions prioritise reliability, adversarial robustness, and protection against catastrophic risks.

The language differs, but the objectives overlap.

As safety frameworks become legible as instruments of national resilience, the boundary between ethical governance and strategic control narrows. Frontier AI firms increasingly operate in dialogue with security institutions, even when formal contracts remain limited.

This convergence does not imply subordination. It indicates structural embedding.

The democratic dilemma

The erosion of AI neutrality raises a political question.

Security integration tends to involve executive coordination, classified evaluation processes, and reduced transparency. Democratic governance depends on deliberation, oversight, and public accountability.

If frontier AI becomes entrenched within national security frameworks before comprehensive civilian governance mechanisms mature, strategic imperatives may shape technological trajectories in ways that are difficult to contest publicly.

The issue is not whether AI should serve defence purposes. The issue is whether a general-purpose cognitive infrastructure, one that permeates civilian life, can be governed primarily through security logic without narrowing democratic space.

Beyond alarmism

Invoking the 'War Department' does not mean the United States has formally abandoned civilian oversight. Rather, the term serves as analytical shorthand for what appears to be a shift in organising principle.

The commanding heights of frontier AI – computation, models, deployment pathways – are increasingly situated within geopolitical rivalry. When a technology becomes this central to state power, states may find neutrality structurally unsustainable.

But what will replace AI neutrality? The pressing question is whether its successor will be unexamined securitisation, or a consciously designed framework that reconciles strategic necessity with democratic accountability.

If frontier AI is increasingly becoming part of state capability, those who govern it may no longer be able to rely on assumptions of neutrality. Navigating this new frontier will require careful political assessment and reckoning.

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.

Author

Elif Davutoğlu
Lecturer, TED University, Ankara

Elif is an interdisciplinary policy researcher specialising in the comparative governance of artificial intelligence.

Her work explores algorithmic governance, political regimes, and the societal and ethical implications of emerging technologies.

Her recent research compares US and Chinese AI policy using computational text analysis.

She has also published on AI’s role in public administration and its impact on power dynamics in political systems.

