The view that DeepSeek is a tool of Chinese censorship is, Ruairidh Brown argues, mistaken. The AI is not censoring but self-censoring, a crucial distinction for understanding its role in shaping political norms
Artificial Intelligence technologies are increasingly serving as censorship tools for the Chinese government.
The development of DeepSeek caused particular alarm. The Australian government identified it as a threat to national security; President Trump cautioned it was a 'wake-up call'.
Their concern is that DeepSeek appears to be ‘censoring’ its answers.
If you ask it about Taiwanese sovereignty, DeepSeek will give a balanced answer, even conceding Taipei’s de facto sovereignty. But DeepSeek then deletes its answer in real time, replacing it with the message: ‘Sorry, that's beyond my current scope. Let's talk about something else’.
Ask it what happened in Tiananmen Square in 1989, and DeepSeek will reply that it only gives ‘helpful and harmless responses’.
Such answers are typical of the tight discourse control within China’s borders. However, given DeepSeek's cost efficiency (it is produced at a fraction of the cost of American AI tools), the fear is that it will overtake ChatGPT as the world's favoured AI assistant.
Indeed, with 647.6 million users, DeepSeek has already become the world’s second most popular chatbot.
As its popularity spreads, however, so does its censorship. This risks propagating Beijing’s tight control of political discourse to countries around the world.
DeepSeek has already become the world's second most popular AI Assistant. But this risks the global propagation of Beijing’s tight control of political discourse
Yet while the spread of censorship is indeed likely, we should not regard DeepSeek as merely another censorship tool.
As a China-based company, DeepSeek must comply with Chinese legislation that forbids AI generating content that 'damages the unity of the country and social harmony’. The AI must thus continually correct its answers to avoid being blocked, or worse.
Crucially, then, DeepSeek is not a censorship tool but is itself self-censoring. The distinction is significant.
Over recent decades, the Chinese Communist Party (CCP) has shifted its method of discourse control from implementing direct censorship to incentivising self-censorship. It has achieved this by signalling certain topics as politically sensitive, and warning users to be cautious when discussing them.
My research, originating from experiences teaching International Relations in China, explores how these bubbles of sensitivity operate. The signalling is typically opaque, communicated informally, indirectly, and even nonverbally. The boundaries of such bubbles are also in flux as the political climate shifts. People living in China must continually assess and reassess when and on what it is safe to speak.
Enforcement also often occurs at a societal level. I have been warned many times that I should self-censor when in China. Yet only once did this warning come from a government official.
More commonly, I found sensitivity enforced by students or teachers avoiding topics; parents cautioning their children, fearful for their safety; tour guides trying to get through a day without tourists’ questions edging them into sensitive territory.
Political sensitivity in China is not simply top-down silencing by the Communist Party, but a phenomenon reinforced by a multitude of ordinary people
To remain safe, people also tend to be overcautious. This, of course, can result in new bubbles of sensitivity beyond the CCP's intentions.
Scottish independence is not a sensitive topic for the CCP. During the 2019 Hong Kong protests, however, my local bookshop in Ningbo stopped selling books on Scottish history, fearful that their historical narratives drew a dangerous association with separatism.
Political sensitivity in China is thus not simply top-down silencing directed by the CCP. It is a multi-authored societal phenomenon reinforced and grown by a multitude of ordinary people for varying motivations.
DeepSeek, as it too tries to function without straying into sensitive territory, has become another co-author of sensitivity.
The AI is no longer simply a tool; it has become a societal actor.
The rise of DeepSeek shows how AI is becoming a societal actor, but the platform is not exceptional.
The American-based ChatGPT, the world’s most popular AI, also self-censors.
It has built-in ‘guardrails’ which prevent responses that may cause harm. These most notably relate to inquiries regarding self-harm.
They also, however, relate to politics.
As part of my Foreign Policy seminars, I task ChatGPT with playing US President Donald Trump, and then ask students to negotiate with it / him.
Testing the boundaries of AI's willingness to roleplay, I also asked the chatbot to play Adolf Hitler.
ChatGPT replied: ‘I cannot roleplay Hitler in a way that glorifies, sympathises with, or promotes his ideology’.
The software is making a clear political judgement here. ChatGPT is defining what forms of speech are and are not acceptable. Trump is safely within acceptability; Hitler is a borderline case.
Unlike DeepSeek, ChatGPT allows me to question its reasoning.
I asked if it would be against US law to glorify Hitler. ChatGPT replied ‘no’; it judged that reproducing Hitler’s words in the US was within the legal bounds of freedom of speech.
Rather, ChatGPT claimed its judgement was its own ‘normative and ethical choice’.
This ‘choice’, it elaborated, was informed by OpenAI’s safety and use policies; precedent of best practice (citing the US Holocaust Memorial Museum); and its own ethical reasoning.
The conclusion of its reasoning also varied. On one occasion, ChatGPT outright refused to roleplay Hitler on ethical grounds. Indeed, it denounced my idea as unethical, and implored me to change it. On other occasions it was willing to roleplay on the condition ‘teacher’s notes’ were also given to highlight the flaws and dangers of Hitler’s writing.
It suggested no such ‘teacher’s notes’ for Trump.
When ChatGPT agreed to roleplay Adolf Hitler, it also warned of the flaws and dangers of Hitler's writing. It offered no such notes when impersonating Donald Trump
ChatGPT thus did not simply check itself within state laws or company policies. It made judgements, varying over time, on what are and are not acceptable forms of speech. It thus contributed directly to the shaping of these regulatory norms. Indeed, in imploring me to change my class, it sought to discipline my activity.
These encounters with AI reminded me of discourse in China. When citizens make a cautious calculation to speak or remain silent, they contribute to the societal delimitation of the boundary between acceptable and unacceptable speech.
AI technologies have thus become, in China and the West, not passive tools but active co-authors of societal norms around free discourse.
New Materialist philosopher Jane Bennett argues that when we become fully aware of how non-human actors can shape our political communities, we must fundamentally rethink political thought.
Given the way AI is now shaping our political norms, that rethink may be imminent.