The 2024 Olympics marked a significant moment in the growing intrusion of AI surveillance into public life. Giulia Dal Bello, Sivan Hirsch-Hoefler and Daphna Canetti argue that, despite the security advantages, governments need to account for public perceptions of surveillance, as negative views may fuel collective action against state authority
The Paris Olympics showcased world-class sport, but they also revealed an unprecedented deployment of AI-driven surveillance technology. While the eyes of the world were on the Games, the spectators were not the only ones watching. The French authorities' high-tech security plan included approximately 300 AI cameras to scan thousands of spectators and athletes.
This heightened security response stems from previous tragic events such as the siege and subsequent killing of 11 Israeli athletes at the 1972 Munich Games and the detonation of a pipe bomb at Atlanta's Centennial Park during the 1996 Olympics. Security was further intensified following 9/11. Today, the Olympic Games are one of the world's most significant security operations outside of wartime.
While AI surveillance in Paris seemed like a necessary protective step, its potential impact on privacy is cause for concern.
In preparation for the Games, France last year enacted Law No. 2023-380, which provides a legal framework for the 2024 Olympics. It includes the controversial Article 7, which allows French law enforcement and its tech contractors to experiment with intelligent video surveillance before, during, and after the Games. The software used to analyse video streams for potential threats in public spaces flags eight categories of events, including abnormally large crowds, abandoned objects, and the presence or use of weapons.
Under a new law, French law enforcement and its tech contractors could experiment with intelligent video surveillance before, during, and after the Olympics
Despite the need to protect civilians from terrorist attacks, the implementation of AI-powered surveillance raises concerns about privacy and legality, given how states operate and use these systems. What data must the systems collect and analyse to identify potential threats? What happens to the data once collected, and who can access it? Regulations provide little transparency on these questions. While safeguards exist to prevent access to biometric data that can identify individuals, the training data may include such information, and the systems could be adjusted to use it.
As global security ramps up, especially during high-profile events like the Olympics, the debate over AI surveillance grows fiercer. Nations are doubling down on video surveillance to keep citizens safe, but at what cost? Some municipalities have installed public cameras that can see into our private lives and gadgets capable of recording us anytime, anywhere. The line between protection and intrusion is razor-thin. The fog surrounding state-run surveillance thickens, with little transparency on how our data is collected, stored, or used.
Nations are doubling down on video surveillance to keep citizens safe, but there is little transparency on how our data is collected, stored, or used
This murkiness fuels criticism, particularly in cyber conflict and digital terrorism. The Palestinian militant group Hamas exposed the dark side of surveillance tech during a recent terrorist attack in Israel. In a chilling turn, Hamas exploited hacked security cameras to gather intel and execute its plan. These unsettling cases – state and terrorist use of cameras – shine a harsh light on the double-edged sword of surveillance in the digital era, where the very tools meant to protect us can also be turned against us.
How do citizens perceive digital state surveillance in the context of counterterrorism? Direct and indirect exposure to terrorist attacks raises individuals' perception of threat and, therefore, their support for state surveillance in an attempt to regain a sense of security. Both cyber and kinetic terrorism, even when non-lethal, exacerbate personal insecurity. In response, individuals may adopt increasingly hardline political views and support strong domestic and foreign security policies. However, citizens may also feel threatened by surveillance itself, or they may react differently to different uses of surveillance by the state.
At the Political Psychology Lab at the University of Haifa, led by Daphna Canetti, researchers are investigating individual responses to exposure to different types of state surveillance in the context of terrorism. Their research has shown that the psychological mechanisms triggered by exposure to surveillance are crucial for subsequent political behaviour. Interestingly, negative emotions, such as hatred and anger, are significant motivators in this respect.
To better understand how psychological mechanisms influence political action, recent studies employing neuroscience research techniques, such as functional magnetic resonance imaging (fMRI), have shown that partisan differences are related to differences in brain activation. Individuals with differing political views experience events differently, even at the neural level.
The primary motives and emotions that prompt individuals to engage in different types of political behaviour, particularly concerning security issues, remain a topic of ongoing debate
Building on these findings, Daphna Canetti's lab conducts innovative studies using fMRI to investigate brain responses to specific political behaviours. The lab is particularly interested in how people psychologically and neurologically perceive different contexts and types of surveillance, and how these perceptions translate into collective actions against the government. As we have seen with the recent protests against the Israeli government, collective action is a crucial means for citizens to express dissent. However, the primary motives and emotions – anger, hatred, hope – that prompt individuals to engage in different types of political behaviour, particularly concerning security issues, remain a topic of ongoing debate.
The use of advanced technology like AI at the 2024 Olympic Games raises important questions about how we balance safety and privacy. These tools can make us safer. But they also make us question our rights and how much power the government should have over its people.
As we move into an era of heightened surveillance, we must ensure that we are not sacrificing our digital liberty in the name of digital security. The real danger might not be what we're trying to stop, but what we lose along the way.