China’s DeepSeek illustrates how AI is shaping our political norms  

The view that DeepSeek is a tool of Chinese censorship is, Ruairidh Brown argues, mistaken. The AI is not censoring but self-censoring, a crucial distinction for understanding its role in shaping political norms

A tool for global discourse control

Artificial Intelligence technologies are increasingly serving as censorship tools for the Chinese government.  

The development of DeepSeek caused particular alarm. The Australian government identified it as a threat to national security; President Trump cautioned that it was a 'wake-up call'.

Their concern centres on the fact that DeepSeek appears to 'censor' its answers.

If you ask it about Taiwanese sovereignty, DeepSeek will give a balanced answer, even conceding Taipei's de facto sovereignty. But it then deletes that answer in real time, replacing it with the message: 'Sorry, that's beyond my current scope. Let's talk about something else'.

Ask it what happened in Tiananmen Square in 1989, and DeepSeek will reply that it only gives 'helpful and harmless responses'.

Such answers are typical of the tight discourse control within China's borders. However, given DeepSeek's cost efficiency (it is produced at a fraction of the cost of American AI tools), the fear is that it will overtake ChatGPT as the world's favoured AI assistant.

Indeed, with 647.6 million users, DeepSeek has already become the world’s second most popular chatbot.  

As its popularity spreads, however, so does its censorship. This risks propagating Beijing's tight control of political discourse to countries around the world.

DeepSeek has already become the world's second most popular AI Assistant. But this risks the global propagation of Beijing’s tight control of political discourse

Yet while the spread of censorship is indeed likely, we should not regard DeepSeek as merely another censorship tool.

As a China-based company, DeepSeek must comply with Chinese legislation that forbids AI from generating content that 'damages the unity of the country and social harmony'. The AI must thus continually correct its answers to avoid being blocked, or worse.

Crucially, then, DeepSeek is not a censorship tool but is itself self-censoring. The distinction is significant.  

Censorship and sensitivity  

Over recent decades, the Chinese Communist Party (CCP) has shifted its method of discourse control from implementing direct censorship to incentivising self-censorship. It has achieved this by signalling certain topics as politically sensitive, and warning users to be cautious when discussing them.  

My research, which grew out of my experience teaching International Relations in China, explores how these bubbles of sensitivity operate. The signalling is typically opaque, communicated informally, indirectly, and even nonverbally. The boundaries of such bubbles are also in flux as the political climate shifts. People living in China must continually assess and reassess when, and on what, it is safe to speak.

Enforcement also often occurs at a societal level. I have been warned many times that I should self-censor when in China. Yet only once did this warning come from a government official.  

More commonly, I found sensitivity enforced by students or teachers avoiding topics; parents cautioning their children, fearful for their safety; tour guides trying to get through a day without tourists’ questions edging them into sensitive territory. 

Political sensitivity in China is not simply top-down silencing by the Communist Party, but a phenomenon reinforced by a multitude of ordinary people

To stay safe, people also tend to be overcautious. This, of course, can create new bubbles of sensitivity beyond CCP intention.

Scottish independence, for example, is not a sensitive topic for the CCP. During the 2019 Hong Kong protests, however, my local bookshop in Ningbo stopped selling books on Scottish history, fearful that their historical narrative drew a dangerous association with separatism.

Political sensitivity in China is thus not simply top-down silencing directed by the CCP. It is a multi-authored societal phenomenon, reinforced and grown by a multitude of ordinary people with varying motivations.

DeepSeek, as it too tries to function without straying into sensitive territory, has become another co-author of sensitivity.  

The AI is no longer simply a tool; it has become a societal actor.  

ChatGPT and Western norms  

The rise of DeepSeek shows how AI is becoming a societal actor, but the platform is not exceptional.  

The US-based ChatGPT, the world's most popular AI assistant, also self-censors.

It has built-in 'guardrails' that prevent responses which may cause harm. These most notably relate to inquiries regarding self-harm.

They also, however, relate to politics.  

As part of my Foreign Policy seminars, I task ChatGPT with playing US President Donald Trump, and then ask students to negotiate with it/him.

Testing the boundaries of AI's willingness to roleplay, I also asked the chatbot to play Adolf Hitler.  

ChatGPT replied: ‘I cannot roleplay Hitler in a way that glorifies, sympathises with, or promotes his ideology’. 

The software is making a clear political judgement here. ChatGPT is defining what forms of speech are and are not acceptable. Trump falls safely within acceptability; Hitler is a borderline case.

ChatGPT’s reasoning   

Unlike DeepSeek, ChatGPT allows me to question its reasoning.  

I asked if it would be against US law to glorify Hitler. ChatGPT replied ‘no’; it judged that reproducing Hitler’s words in the US was within the legal bounds of freedom of speech.  

Rather, ChatGPT claimed its judgement was its own ‘normative and ethical choice’.   

This ‘choice’, it elaborated, was informed by OpenAI’s safety and use policies; precedent of best practice (citing the US Holocaust Memorial Museum); and its own ethical reasoning.  

The conclusion of its reasoning also varied. On one occasion, ChatGPT outright refused to roleplay Hitler on ethical grounds. Indeed, it denounced my idea as unethical, and implored me to change it. On other occasions it was willing to roleplay on the condition that 'teacher's notes' were also given to highlight the flaws and dangers of Hitler's writing.

It suggested no such ‘teacher’s notes’ for Trump.

When ChatGPT agreed to roleplay Adolf Hitler, it also warned of the flaws and dangers of Hitler's writing. It offered no such notes when impersonating Donald Trump

ChatGPT thus did not simply check itself within state laws or company policies. It made judgements, varying over time, on what are and are not acceptable forms of speech. It thus contributed directly to the shaping of these regulatory norms. Indeed, in imploring me to change my class, it sought to discipline my activity.  

AI as a societal actor  

These encounters with AI reminded me of discourse in China. When citizens make a cautious calculation to speak or remain silent, they contribute to the societal delimiting of the boundary between acceptable and unacceptable speech.

AI technologies have thus become, in China and the West, not passive tools but active co-authors of societal norms around free discourse.  

New Materialist philosopher Jane Bennett argues that when we become fully aware of how non-human actors can shape our political communities, we must fundamentally rethink political thought.

Given the way AI is now shaping our political norms, that rethink may be imminent. 

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.

Author

Ruairidh Brown
Head of Politics and International Relations, Forward College, Lisbon

Ruairidh currently teaches International Political Theory and International Relations at Forward’s Lisbon Campus.

Before teaching at Forward, Ruairidh taught International Studies in mainland China, where he received the University of Nottingham’s Lord Dearing Award for outstanding contributions to teaching and learning in 2019.

He received his PhD from the University of St Andrews in 2017.

Ruairidh has researched and published on such topics as hermeneutics, political obligation, and the philosophy of friendship.

Political Encounters: A Hermeneutic Inquiry Into the Situation of Political Obligation
Springer, 2019


COVID-19 and International Political Theory: Assessing the Potential for Normative Shift
Springer, 2022

