In late March, the Future of Life Institute released an open letter calling for a pause in advanced AI experiments. Specifically, it wanted to pause 'the training of AI systems more powerful than GPT-4'. But isn't that throwing the baby out with the bathwater? Łukasz Wordliczek argues that a viable answer to the AI hype is to create a genuinely global regulatory regime.
In the past, you could have found many people who had never heard of Artificial Intelligence (AI). These days, you'd struggle to find anyone who hasn't. Much recent media coverage of AI developments paints a gloomy picture.
Indeed, public concerns, which appear well founded, stretch from the automation of work, to privacy and legal issues, to threats to core moral values. Signatories to the Future of Life letter are calling for an immediate pause in all AI experiments. But is an all-out shutdown really a viable option?
AI is only one of many potentially malignant technologies, including human cloning, germline modification, gain-of-function research, and eugenics. To make their argument even more persuasive, signatories to the letter ask: dare we risk losing control of our civilisation?
Later on in the letter, matters become more nuanced. Signatories suggest a defence against imminent AI threats that's taken from well-known playbooks: shut down flourishing research. This radical move, the letter suggests, would buy time for the introduction of safeguards that could mark out future areas of safe AI development. So, what should we do?
One solution is already at our fingertips: a regulatory regime with effective sanctions for prospective violators. Let's be clear: AI is, to a large extent, like a black market. As long as there is demand, there will be supply. Think of illegal drugs, arms, stolen artefacts, rare species, even enslaved humans. Yet if the risk of delivering advances is too high relative to the potential backlash, AI development becomes less attractive to its would-be creators.
Thus, it seems that we must establish a universal (global) regulatory framework with robust safeguards to enforce its principles. Of course, such a framework will not deter every malign actor. But can we be sure that none of these actors is currently developing a potentially calamitous technology?
AI is like a black market. Yet if the risks of development are too high in relation to the potential backlash, developments become less attractive to their would-be creators
Regulatory regimes vary in their effectiveness, but they can work. Varieties of Democracy (V-Dem) and its sister project, the Digital Society Project (DSP), offer a case in point. Their data shows that online privacy rules are more effective where there is greater public awareness of them (the demand side) and greater institutional scrutiny (the supply side). Democracies make it easier to meet these two requirements, but democracies are not a privileged, exclusive milieu. Quite the contrary: V-Dem and DSP data shows that less consolidated democracies and hybrid regimes, too, can catch up in enforcing certain regulations. This brings us to the crux: what kind of regulations are we talking about?
We already know the solution. Take as an example the civil airline industry and its umbrella institution, the International Civil Aviation Organization (ICAO). This is a genuinely global body: its membership (193 states as of mid-April 2023) matches that of the UN, with which ICAO is formally affiliated. This means that countries governed by very different regimes all sit at the same table. Furthermore, all ICAO member countries are subject to the same decisions, standards, procedures, and other regulatory measures. Most importantly, ICAO has two main objectives: the development of civil aviation, and air safety. Both aims are consequential here.
The former directly addresses the main argument of this post: we do not need to shut down research; rather, we should concentrate on nourishing growth in the field. And ICAO's latter goal – safety – should be our focus for the development of AI. People often grumble about the inconveniences of air travel: seats are uncomfortable, security checks are strict and stressful, getting from the airport to our destination is a trip in itself, and so on. But all these inconveniences are imposed in pursuit of a nobler goal: the safety of human beings.
So, yes: it is not too late to build an ICAO-like regulatory regime for AI that reaches far beyond today’s timid attempts. AI-enhanced technologies will become even more ubiquitous in the future. The agenda of this regime, therefore, should be broad enough to encompass every country in the world. It should help make AI developments safer, reducing the chances of 'algorithmic black swans'. Finally, the architects of this regime must equip it with sanction instruments to make infringements less likely.
None of these goals would be achievable if we set aside research and development. Thinking strategically, it's clear that prospective black-market gains are too high to ignore: there is no guarantee that malign labs won't continue to train advanced AI models.
The new regulatory regime must encompass every country in the world, and should include sanction instruments to discourage infringements
None of the measures currently in place is likely to achieve these goals. Many people would welcome a new regime of this kind; many others, however, will criticise it as too US-focused, American-driven, or designed to impose 'Western values'. One of the most important puzzles for the regime's designers, therefore, will be ensuring that actors advocating 'internet sovereignty' are on board with the project.
Technology, any technology, may be used for malign or benign purposes. But it is not within technology's capacity, or responsibility, to make ethical decisions; moral dilemmas belong to human beings. And we need a guiding framework through which to pursue actions that benefit the majority of humanity. There is no time to waste: if we fail to take appropriate and commensurate action now, the people developing AI technologies will set the terms themselves, extending 'internet sovereignty'-style controls to AI. By then, it will be too late.