Digital resilience in the age of synthetic media

New technologies demand a shift toward a broader framework of digital resilience. Misinformation threatens to deepen inequality and fragment access to common knowledge. James Rice argues that digital resilience depends upon strategic interventions spanning digital infrastructure, international institutions, and citizen psychology

In May 2023, a fabricated image depicting an explosion near the Pentagon briefly went viral on social media. The post caused a measurable dip in US stock markets before users eventually debunked it. The image was not particularly sophisticated by today’s standards. Yet it revealed that even crude synthetic media can trigger real-world consequences by exploiting vulnerabilities in existing information ecosystems.

As generative artificial intelligence (AI) systems become more powerful, faster, and cheaper, society faces ever deeper challenges in safeguarding democratic discourse, promoting economic stability, and maintaining public trust.

Evidence from across the research community points to the need for a new, empirical model of the causes and impacts of digital resilience, grounded in theories of political and epistemic networks. AI's impact on information access cannot be understood in isolation: it runs through interdependent, international media infrastructures.

The limits of detection

Early policy responses to synthetic media emphasised detection: identifying whether a given piece of online content had been generated or manipulated. While misinformation detection remains important, research increasingly shows that it is not a standalone solution. Moderation systems lag behind the power of generative models, and their accuracy degrades as those models improve.

Moreover, detection places the burden of verification on platforms or end-users after content has already spread. Studies of information diffusion in economics and political science demonstrate that false or emotionally charged content travels faster and farther than corrections, especially in politically polarised environments. By the time detection tools flag synthetic media, the damage may already be done.
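The asymmetry can be illustrated with a toy branching-process sketch. The share rates below are invented for illustration, not empirical estimates: an item that each exposed person passes on to slightly more than one other grows geometrically, while a correction passed on to slightly fewer than one fizzles out.

```python
# Toy illustration (not an empirical model) of why content shared even
# slightly more readily per exposure reaches far more people than a
# correction. Branching factors are made-up numbers for illustration.

def cumulative_reach(r: float, generations: int) -> float:
    """Expected total exposures of a cascade in which each exposed person
    passes the item on to r others, on average, per generation."""
    total, current = 1.0, 1.0
    for _ in range(generations):
        current *= r
        total += current
    return total

false_reach = cumulative_reach(1.3, 10)       # emotionally charged item
correction_reach = cumulative_reach(0.9, 10)  # drier corrective item
print(false_reach > 5 * correction_reach)     # the gap compounds quickly
```

Even a modest per-exposure advantage compounds over generations of sharing, which is why corrections issued after the fact rarely catch up.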

This asymmetry has led researchers to argue for a shift away from policing what counts as truth, and who may share it, toward provenance and accountability. Modern approaches should therefore emphasise the origins of content, not whether it is true in some absolute sense.

Digital provenance

One method for establishing provenance is embedding cryptographic metadata that records how and when digital content was created or modified. The Coalition for Content Provenance and Authenticity (C2PA) standard is now supported by major firms including Adobe, Microsoft, and OpenAI. C2PA aims to make such metadata interoperable across platforms: a shared language for authenticity.
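A loose sketch of the underlying idea, using only Python's standard library: bind a hash of the content, the creating tool, and a timestamp into a signed record, so that any later edit to the content invalidates the chain. This is illustrative only; real C2PA manifests use X.509 certificate chains and a binary container format, not an HMAC over JSON.

```python
import hashlib, hmac, json, time

# Hypothetical sketch of a provenance manifest, loosely inspired by the
# C2PA idea of signed, tamper-evident metadata. Not the real C2PA format.

SECRET_KEY = b"demo-signing-key"  # stand-in for a creator's signing key

def create_manifest(content: bytes, tool: str) -> dict:
    """Bind a content hash, creation tool, and timestamp into a signed record."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_with": tool,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content itself is unchanged."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...image bytes..."
manifest = create_manifest(image, tool="ExampleCam 1.0")
print(verify_manifest(image, manifest))                 # True: chain intact
print(verify_manifest(image + b"edited", manifest))     # False: content changed
```

The design point is that verification requires no judgement about whether the content is true, only whether its recorded history is intact, which is exactly the shift from truth-policing to provenance described above.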

To deter misinformation, provenance must sit alongside digital governance mechanisms, such as standards, liability rules, and auditing requirements

Media literacy organisations suggest that provenance systems, while imperfect, significantly improve users' ability to distinguish authentic from synthetic content when their signals are presented clearly. Recent research shows, however, that this effect attenuates when fact-checking is performed by AI alone. Unlike post-hoc detection, provenance provides a verifiable chain of identification for digital content, from its inception to its dissemination.

Still, provenance systems face challenges. To succeed, platforms and creators must give users psychological and social incentives to seek out verified content. Scholars therefore increasingly argue that provenance must sit alongside digital governance mechanisms, such as standards, liability rules, and auditing requirements, if it is to deter misinformation and disinformation at scale.

Institutional resilience and governance

Governments have begun to respond by regulating the proliferation of online misinformation. The European Union's Digital Services Act (DSA) and AI Act together represent the most comprehensive attempt to manage systemic information risk. These instruments impose transparency, risk-assessment, and mitigation obligations on large platforms and AI providers, focusing on scalable rules that govern how platforms amplify, monetise, and promote information.

The broader goal of governments' AI regulation is to increase cooperation and collaboration between tech firms, regulators, and the public

Complementing the EU's regulation, initiatives such as the US National Institute of Standards and Technology's (NIST) AI Risk Management Framework and 'deepfake' evaluation programmes are helping to standardise how AI-generated media is verified. The larger goal of these efforts is to increase cooperation and collaboration between tech firms, regulators, and the public. Ultimately, such regulation aims to sow the seeds of a future global order led by technocratic democracies.

Digital resilience also depends on a global landscape of cognitive, social, and cultural attributes: the features of our societies that shape how political agents interpret, share, and respond to information.

Prebunking

A growing body of research suggests that prebunking — exposing people to weakened examples of misinformation before they encounter the real thing (a 'vaccine' against misinformation) — can significantly improve resistance to online manipulation. Large-scale randomised experiments show that targeted interventions can reduce susceptibility to misinformation across ideological groups, and that these effects persist over time.

Unlike fact-checking, which is corrective and reactive, prebunking is preventative. It treats misinformation as a predictable social phenomenon that scientifically informed psychological interventions can mitigate. Such interventions scale well across online communities, and complement the efforts of technologists and governments to stem the tide of weaponised misinformation.

'Vaccinating' people against misinformation can improve resistance to online manipulation

During recent elections, for example, Taiwan combined rapid fact-checking with public education, platform cooperation, and civic engagement to blunt the impact of coordinated disinformation campaigns. Analysts attribute Taiwan's success to well-timed institutional action working on an already digitally literate public.

How to achieve social resilience

Misinformation threatens a near future in which advanced technology enables public deception at scale. Robust societal resilience, combined with government regulation, can slow the spread of harmful and extreme content, shoring up existing social weaknesses by countering misinformation at its source.

As general-purpose AI continues to evolve, society's goal should not be to eliminate synthetic media and revert to a pre-technological state. That would be impossible and undesirable; the technology is here to stay. Rather, our goal should be to ensure that communities can absorb rapid, non-linear technological shocks without losing essential trust in shared conceptions of truth and reality.

Resilience in the digital age is a property of strong epistemic foundations of social order, not the effect of any single policy lever. The only credible path forward depends on widespread public awareness of AI capabilities and risks, and on robust, proactive policymaking that makes skilled use of new technologies at every level of science and society.

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.

Author

James Rice
PhD Student, Department of Government, University of Essex

James holds two MSc degrees, both in political and social philosophy, from the University of Edinburgh and the London School of Economics and Political Science.

He has previously published an essay on climate justice in the Cambridge Journal for Climate Research, and has written for a variety of blogs at the LSE.

James's current research interests include environmental politics, quantitative methods, AI and LLMs, online misinformation, and the philosophy of science.

