The 'Dead Internet Theory' and the rise of synthetic politics

The Dead Internet Theory, once dismissed as 'paranoid fantasy', now offers a disturbingly useful framework for understanding digital politics. Mimi Mihăilescu argues that the theory's growing credibility masks deeper questions about whether we're overestimating AI's political power while underestimating our willingness to accept technological determinism.

From conspiracy to political conditioning

In an era when social media teems with endless chatter, it's increasingly difficult to tell whether the people talking are, in fact, people. Artificial intelligence is now shaping what counts as political truth. From AI-generated political memes to the algorithmic manipulation of news feeds, digital infrastructures have evolved from platforms of participation into instruments of persuasion.

The Dead Internet Theory originated on fringe web forums in the late 2010s. It claimed the internet had 'died' around 2016, when AI-generated content began to dominate search results and social feeds. The theory is unsettlingly plausible: while the literal claim remains speculative, its insight that the digital public sphere is increasingly automated has proved prophetic.

The result is not just misinformation but the emergence of a synthetic public sphere in which machines simulate democratic communication. During the 2016 Brexit referendum, automated accounts drove most political social media activity. Nearly half of all online traffic came from non-human sources. In 2024, Google acknowledged its search results were inundated with websites that 'feel like they were created for search engines instead of people'.

Does AI-generated misinformation change minds – or does it merely create the illusion of influence?

But what these statistics don't reveal is whether such activity changes minds or merely creates the illusion of influence. During the 2016 US election, the Russian Internet Research Agency created thousands of fake accounts posing as American citizens, running campaigns for both Black Lives Matter and Blue Lives Matter. This was reality manipulation: the creation of self-sustaining feedback loops in which humans and bots co-produce entire political realities.

The dynamic has since evolved. During the 2024 electoral cycle, nefarious actors weaponised AI models such as OpenAI's GPT-4, alongside open-source diffusion tools, to produce fake political statements, fabricated visuals, and coordinated misinformation. X, TikTok, and Telegram saw floods of synthetic political content during elections in the US, UK, Brazil, and the EU, much of it algorithmically boosted for engagement.

Selective panic and motivated reasoning

But here's the inconvenient truth: the much-anticipated and feared 'AI election apocalypse' didn't materialise. Even the aforementioned 2016 Russian interference produced no measurable changes in attitudes, polarisation, or voting behaviour among those exposed to its campaigns on Twitter (now X).

In 2024 election campaigns, despite Russia's persistent interference on X, there were no measurable changes in attitudes, polarisation, or voting behaviour

Meta reported that AI-generated misinformation had a 'modest and limited' impact in 2024. Several other assessments suggested that fears of AI-enabled disinformation were overblown, falling far short of apocalyptic predictions. The gap between predicted catastrophe and measurable impact suggests we're being 'deepfaked by deepfakes': media coverage and public anxiety may be disproportionate to actual effects. When Sam Altman, OpenAI's CEO, voiced concerns about the Dead Internet Theory, the irony was hard to miss: the person whose company enabled mass AI generation was warning about its consequences.

Yet this doesn't mean AI-generated misinformation is harmless. It means we may be overestimating its direct persuasive power while underestimating its capacity to corrode institutional trust. Trump supporters circulated hyperrealistic images of Black Americans wearing 'Trump Won' shirts, claiming this visual 'evidence' of demographic support contradicted polling data. In Brazil, WhatsApp networks spread fabricated photographs of opposition leaders in compromising situations. In Europe, social media users shared deepfaked images of migrants to stoke anti-immigration sentiment.

So the real threat isn't that AI will swing elections, but that it erodes the epistemic foundations of democratic deliberation itself. It creates a 'liar's dividend': once any content can plausibly be synthetic, authentic content can be dismissed as fake.

Habermas couldn't have predicted this

Political theorist Jürgen Habermas defined democracy as deliberation among rational citizens grounded in shared reality. That foundation no longer holds. When voters cannot discern whether the content they consume – or even the interlocutor they debate – is human, the very notion of a 'public sphere' collapses. What replaces it is manufactured consensus.

AI-generated posts receive inflated engagement from bot networks, which in turn prompts platform algorithms to prioritise them. The result is a self-reinforcing loop in which synthetic signals create the illusion of widespread political support. Since Elon Musk's acquisition of Twitter and its rebranding as X, the platform has seen a surge in AI-generated memes and misinformation, often amplified by algorithmic changes that deprioritise fact-checking. Scholars describe this as 'algorithmic authoritarianism': governance through the invisible control of attention.

AI-generated campaigns result in a self-reinforcing loop in which synthetic signals create the illusion of widespread political support

Perhaps the Dead Internet's real lesson isn't that AI killed democracy but that democratic ideals of transparent, rational public discourse were always aspirational fictions.

Reclaiming the digital world

Current responses, such as AI watermarking, content labelling, and automated fact-checking, treat AI-generated misinformation as a technical problem requiring technical solutions. This fundamentally misdiagnoses the challenge. It rests on a flawed assumption: that misinformation is an anomaly within otherwise neutral systems. In truth, the platforms themselves are commercial systems optimised for engagement, and as such structurally incompatible with democratic discourse. As long as engagement optimisation defines success, the logic of polarisation will prevail.

But accepting Dead Internet Theory's most dire implications means conceding defeat before understanding the battlefield. We’re operating with insufficient data and enormous research gaps, measuring the wrong things, and drawing conclusions prematurely.

Democracy's challenge isn't distinguishing real from synthetic, but building institutions capable of governing commercial platforms whose business model depends on polarisation. If we cannot imagine democratic information sovereignty, in which digital platforms are treated as public utilities subject to democratic oversight, we risk creating what Dead Internet theorists fear most: an elaborate simulation of democracy itself.

The internet may not be dead. But our capacity to govern it democratically remains dangerously underdeveloped.

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.

Author

Mimi Mihăilescu
Behavioural Risk Researcher

Mimi has a PhD from the University of Bath.

With a background in political studies, sociology, and communication studies, Mimi explores how digital culture shapes the way we think, behave, and engage with the world around us.

Her research dives deep into the messy, fascinating spaces where internet culture and politics collide, from memes and online activism to the hidden dynamics of social media communities.

Mimi is interested in how digital spaces amplify human behaviour, influence collective decision-making, and blur the lines between the personal and the political.

Through her work, Mimi sheds light on what our online habits reveal about risk, identity, and power in a hyperconnected world.
