The Case for an International Artificial Intelligence Agency

Self-regulation and national regulation of AI won’t be enough. Even the EU can’t save us. The case for an International Artificial Intelligence Agency.


Earlier this year, the European Union proposed a draft regulation to protect the fundamental rights of its citizens from certain applications of artificial intelligence (AI). In June, the Biden Administration launched a task force seeking to spur AI innovation. On the same day, China adopted its new Data Security Law, asserting a stronger role for the state in controlling the data that fuels AI.

These three approaches — rights, markets, sovereignty — highlight the competing priorities as governments grapple with how to reap the benefits of AI while minimizing harm.

A cornucopia of proposals offers to fill the policy void. For the most part, however, the underlying problem is misconceived as too hard or too easy. Too hard, in that great effort has gone into generating ethical principles and frameworks that are unnecessary or irrelevant, since most essentially argue that AI should obey the law or be “good”. Too easy, in that it is assumed that existing structures can apply those rules to entities that operate with speed, autonomy, and opacity.

Personally, I blame Isaac Asimov.

Asimov is regularly quoted for his famous Three Laws of Robotics. They make for good science fiction, but if the laws had actually worked, his literary career would have been brief: the drama of his robot stories comes precisely from the ways in which the laws break down.

The future of regulating AI will rely on laws developed by states and standards developed by industry. Unless there is some coordination at the global level, however, the benefits of AI will not be shared equitably, nor its risks managed effectively.

Useful lessons can be taken here from another technology that was cutting-edge when Asimov started publishing his robot stories: nuclear energy.

First, it is a technology with enormous potential for good and ill that has, for the most part, been used positively.

Observers from the dark days of the Cold War would have been pleasantly surprised to learn that nuclear weapons were not used in conflict after 1945 and that only a handful of states possess them the better part of a century later.

The international regime that helped ensure this offers a possible model for global regulation of AI.

The grand bargain at the heart of President Eisenhower’s 1953 “Atoms for Peace” speech, and of the International Atomic Energy Agency (IAEA) that grew out of it, was that the benefits of nuclear energy could be shared, coupled with a mechanism to ensure that the technology was not weaponized.

The equivalent weaponization of AI — either narrowly, through the development of autonomous weapon systems, or broadly, in the form of a general AI or superintelligence that might threaten humanity — is today beyond the capacity of most states.

For weapon systems at least, that technical gap will not last long.

Another reason for the comparison is that, as with nuclear energy, it is the scientists most deeply involved in AI research who have been the most vocal in calling for international regulation.

The various guides, frameworks, and principles that have been proposed were largely driven by scientists, with states tending to follow rather than lead.

As the nuclear non-proliferation regime shows, however, good norms are necessary but not sufficient for effective regulation.

Of course, the limitations of an analogy between nuclear energy and AI are obvious.

Nuclear energy involves a well-defined set of processes related to specific materials that are unevenly distributed; AI is an amorphous term and its applications are extremely wide. The IAEA’s grand bargain focused on weapons that are expensive to build and difficult to hide; weaponization of AI promises to be neither.

Nonetheless, some kind of mechanism at the global level is essential if regulation of AI is going to be effective.

Industry standards will be important for managing risk, and states will be a vital part of enforcement. In an interconnected world, however, regulation premised on the sovereignty of territorially bound states is not fit for purpose. The hypothetical International Artificial Intelligence Agency (IAIA) offered here is one way of addressing that structural problem.

The biggest difference, however, between attempts to control nuclear power in the 1950s and attempts to control AI today is that, when Eisenhower addressed the United Nations, the effects of the nuclear blasts on Hiroshima and Nagasaki were still being felt. The Soviet Union had tested its own devices and the knowledge seemed likely to spread to others — perhaps all others.

There is no such threat from AI at present and certainly no comparably visceral evidence of its destructive power. Absent that threat, getting agreement on meaningful regulation of AI at the global level will be difficult.

Yet even if it is difficult to create a global institution capable of preventing the first true AI emergency, without one it may be impossible to prevent the second.

Simon Chesterman is Dean of the National University of Singapore Faculty of Law and Senior Director of AI Governance at AI Singapore. His latest book is “We, the Robots? Regulating Artificial Intelligence and the Limits of the Law”, now available worldwide.

This article first appeared in Engineering & Technology Magazine on 4 August 2021.