It's time to set up a global regulatory regime for AI

In late March, the Future of Life Institute released an open letter calling for a pause in advanced AI experiments. Specifically, it wanted to pause 'the training of AI systems more powerful than GPT-4'. But isn't that throwing the baby out with the bathwater? Łukasz Wordliczek argues that a more viable response to the AI hype is to create a genuinely global regulatory regime

Setting the stage

Not long ago, you could have found many people who had never heard of Artificial Intelligence (AI). These days, you'd struggle to find anyone who hasn't. Much recent media coverage of AI developments paints a gloomy picture.

Indeed, public concerns, which appear well founded, stretch from work automation, to privacy and legal issues, to the threat to core moral values. Signatories to the Future of Life letter are calling for an immediate pause in all AI experiments. But is an all-out shutdown really a viable option?
AI is only one of many potentially harmful technologies, alongside human cloning, germline modification, gain-of-function research, and eugenics. To make their argument even more persuasive, the letter's signatories ask: dare we risk losing control of our civilisation?

Later on in the letter, matters become more nuanced. Signatories suggest a defence against imminent AI threats that's taken from well-known playbooks: shut down flourishing research. This radical move, the letter suggests, would buy time for the introduction of safeguards that could mark out future areas of safe AI development. So, what should we do?

The need for a regulatory framework

One solution is already at our fingertips: a regulatory regime with effective sanctions for prospective violators. Let's be clear: AI is, to a large extent, like a black market. Wherever there is demand, there will be supply. Think of illegal drugs, arms, stolen artefacts, rare species, even enslaved humans. Yet if the risk of delivering advances is too high relative to the potential backlash, AI development becomes less attractive to its would-be creators.

Thus, it seems that we must establish a universal (global) regulatory framework with robust safeguards enforcing its principles. Of course, this framework will not deter all malign actors. But without it, can we be sure that none of these actors is currently developing a potentially calamitous technology?
Regulatory regimes vary in their effectiveness, but they can work. Varieties of Democracy (V-Dem) and its sister project, the Digital Society Project (DSP), offer exemplary proof. Their data show that regulations are more effective where there is greater public awareness of online privacy rules (the demand side) and stronger institutional scrutiny of them (the supply side). Democracies make it easier to meet these two requirements, but they hold no monopoly. Quite the contrary: V-Dem and DSP data show that less consolidated democracies and hybrid regimes, too, can catch up in the race to enforce certain regulations. This brings us to the crux: what kind of regulations are we talking about?

Lessons learned

We already know the solution. Take, as an example, the civil airline industry and its umbrella institution: the International Civil Aviation Organization (ICAO). This is a genuinely global body because its membership (193 states as of mid-April 2023) matches that of the UN, to which ICAO is technically affiliated. This means that countries governed by very different regimes all sit at the same table. Furthermore, all ICAO member countries are subject to the same decisions, standards, procedures, and other regulatory measures. Most importantly, ICAO has two main objectives: the development of civil aviation, and air safety. Both aims are consequential here.

The former directly addresses the main argument of this post. That is, we do not need to curb research with overly cumbersome restrictions; rather, we should concentrate on nourishing growth in the relevant field. And ICAO's latter goal – safety – should be our focus for the development of AI. People often grumble about the inconveniences of airline travel: seats are uncomfortable, security checks are too strict and stressful, getting from an airport to our destination is a trip in itself, and so on. But all these inconveniences are enforced in pursuit of a nobler goal: the safety of human beings.

Lessons to learn

So, yes: it is not too late to build an ICAO-like regulatory regime for AI that reaches far beyond today’s timid attempts. AI-enhanced technologies will become even more ubiquitous in the future. The agenda of this regime, therefore, should be broad enough to encompass every country in the world. It should help make AI developments safer, reducing the chances of 'algorithmic black swans'. Finally, the architects of this regime must equip it with sanction instruments to make infringements less likely.

None of these goals would be achievable if we set research and development aside. Thinking strategically, the prospective black-market gains are too high to ignore: there is no guarantee that malign labs won't continue to train advanced AI models.
None of the measures currently in place is likely to achieve these goals. Many people would nonetheless welcome steps towards such a regime; many others, however, will criticise them as too US-focused, American-driven, or designed to impose 'Western values'. One of the most important puzzles for the regime's designers, therefore, will be to bring the actors advocating 'internet sovereignty' on board with the project.

Technology, any technology, may be used for purposes malign or benign. But it is not in technology's capacity, or responsibility, to make decisions about ethics. Moral dilemmas belong to human beings. And we need a guiding framework through which to pursue actions that benefit the majority of humanity. There is no time to waste: if we fail to take appropriate and commensurate action now, the people developing AI technologies will do it on their own terms, extending 'internet sovereignty'-style fragmentation to AI. By then, however, it will be too late.

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.


Łukasz Wordliczek
University Professor, Institute of American Studies and Polish Diaspora, Faculty of International and Political Studies, Jagiellonian University

Łukasz teaches courses on US and Polish political systems and foreign policy.

He is the author, co-author and editor of nine books, several dozen articles and conference papers on relations between technology and politics, non-state actors, public policy and political science methodology.

His two most recent articles are: Beyond Bag of Words and Monolingual Models? A Machine Translation Solution to Solving Classification Tasks for Comparative Research (with Akos Mate, Miklós Sebők, Dariusz Stolicki and Ádám Feldmann) and Neural Networks and Political Science: Testing the Methodological Frontiers (both forthcoming in early 2023).

Łukasz is currently setting up the Polish branch of the Comparative Agendas Project.
