Two Cheers for AI Ethics

An AI-powered robot attempts a shot during the men’s basketball final of the Tokyo 2020 Olympic Games: the problem is that AI systems are not governments that need to be nudged toward the better angels of their nature. © Icon Sport/Getty Images
Last week, the United Nations Educational, Scientific and Cultural Organization announced that it had concluded the first global agreement on the ethics of artificial intelligence. Its Director-General, Audrey Azoulay, declared that this was a “major answer” to the need for rules to ensure that AI benefits humanity.
The caveats came thick and fast.
The United States and Israel, big players in AI, are not members of UNESCO. China signed off on text that AI systems “should not be used for social scoring or mass surveillance purposes” — tools that it has already deployed widely. More generally, the 26-page consensus document is a “recommendation” to be applied “on a voluntary basis… in conformity with the constitutional practice and governing structures of each state.”
So it is not exactly Isaac Asimov’s three laws of robotics.
Yet the text follows Asimov by assuming that the problem posed by AI is that we need new rules to regulate our silicon siblings. In the past five years, hundreds of such guides, frameworks and principles have been drafted by companies and governments, endorsed by everyone from NGOs to the Pope. Earlier this year the European Union unveiled a draft regulation on AI; in September, China offered up its own.
There is some variation. The EU focuses on individual rights, for example, whereas China’s concern seems to be the interests of the state. But the broad approach misunderstands the problem of regulating AI, treating it as both too hard and too easy. Too hard, because it assumes that existing laws are inadequate to deal with AI.
In fact, most of these documents can be boiled down to a statement that people, companies and governments should not be able to do things using AI that they cannot do themselves.
Of course, AI systems should not be biased. Of course, they should respect privacy. Of course, people harmed by them should be able to get compensation.
Too easy, because the real devil is in the detail of applying existing rules to AI systems that are increasingly fast, opaque and autonomous. How will we know if they are biased? What data are they using? And who will assume responsibility for their actions?
Here the most useful innovation in the UNESCO document is the requirement for Ethical Impact Assessments before releasing AI systems into the market. Yet that points to a second problem with the consensus approach to AI ethics: not just how to regulate, but when.
The UNESCO model appears to follow the manner in which human rights gained acceptance over the second half of the 20th century.
After World War II, the United Nations oversaw nonbinding instruments, such as the 1948 Universal Declaration of Human Rights, that ensured a big-tent approach to human rights. The commitments were broad but shallow, enabling democracies and dictatorships alike to embrace high-minded rhetoric without fear of consequences.
Yet, nearly 80 years later, those vague agreements have laid the foundation for binding commitments and the Universal Periodic Review, to which all countries submit. The system is far from perfect, but virtually no state now rejects human rights completely.
The problem is that AI systems are not governments that need to be nudged toward the better angels of their nature. The challenge posed by AI may be more like climate change, where slow agreements and nonbinding targets fail to address structural problems like our dependency on hydrocarbons.
Global action is possible, but it is easiest when the goal is narrow and shared. When chlorofluorocarbons, or CFCs, were discovered to be destroying the ozone layer, for example, the world outlawed them through the 1987 Montreal Protocol.
An alternative approach to regulating AI at the global level would similarly focus on guarding against specific harms, such as the development of AI systems that are uncontrollable or uncontainable, and on global standards for transparency and explainability in practice.
One red line, proposed by the International Committee of the Red Cross, is a ban on lethal autonomous weapons, which speaks to the fears of many that AI systems may one day turn on their makers. Many states are hedging, wary of giving up weapons that may provide an edge on the battlefield, though it is possible that such weapons will ultimately be stigmatized as inherently immoral, in the manner of chemical and biological weapons.
The most important thing people forget about Asimov’s famous laws of robotics is that they did not work. If they had, his literary career would have been brief. Indeed, the very first story in which he introduced the laws, “Runaround,” was about a robot paralyzed by contradictions over how to follow them.
AI offers enormous potential benefits for humanity. Reaping those benefits while minimizing or mitigating potential harms is certainly helped by agreeing that ethics are important in theory. The real test, however, will be how we implement those principles in practice.
This article first appeared in Nikkei Asia on 3 December 2021 as “We all agree that AI needs global rules but who will enforce them?”