Blog Post

AI & Global Governance: Three Distinct AI Challenges for the UN

Anticipating the challenges of Artificial Intelligence.

Date Published
7 Dec 2018
Author
Nicholas Wright

Everybody seems to agree that Artificial Intelligence (AI) is – and will be – a determining factor for the future of humanity: from what it means to be human, to the social impacts of laying off Uber drivers once cars drive themselves, to AI propaganda in politics, to the rise of the robots or a superintelligence exterminating humanity. But what does AI’s bewildering profusion of implications mean for the global order? Anticipating the challenges of AI requires breaking them down into more manageable bites. Here I describe three distinct bundles of challenges; failure to manage any one of them would be catastrophic.

"Singularity" and human existentialism

Though artificial superintelligence is likely at least a couple of decades away, the "singularity" is the single biggest concern for many AI scientists. The singularity is the notion that exponentially accelerating technological progress will create a form of AI that exceeds human intelligence and escapes our control. Such a superintelligence might then deliberately or inadvertently destroy humanity – or, alternatively, usher in an era of plenty for its human subjects. As Henry Kissinger describes, the catastrophic consequences may not only be physical but may also apply to humans’ conceptions of themselves. For him, the most important question is: “what will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms meaningful to them?” Given the rate of progress, the singularity may occur at some point this century.

But although a singularity would clearly be momentous, nobody knows when, if, or how one might occur, so there are limits on what can sensibly be said or planned for in the present. Previous existential technologies have emerged: nuclear weapons can obliterate humanity, and they provide a useful, although imperfect, analogy for global efforts to manage or prevent a singularity. Preventing nuclear war has required careful management and luck, which we will need again. Preventing nuclear proliferation is tough, and despite considerable success, we could not prevent North Korea from acquiring nuclear weapons. Could one persuade Russian, Chinese, or US leaders to stop AI programmes they view as vital for their security? Indeed, that practical problem is more troublesome than Kissinger’s further concerns about human understanding of our own nature. Human egocentrism is remarkably robust – if we can (despite wobbles) deal with Darwin telling us we are just hairless apes, we can survive this new disclosure.

The bottom line is that, just as with nuclear weapons, we will have to manage singularity-related issues within the international order as best we can, although our best will inevitably be grossly imperfect. The singularity potentially represents a qualitatively new challenge for humanity that we need to think through and discuss internationally.

Change in the means of production across social sectors

A second basket of challenges arises because AI and big data will radically change the means of production across many economic and societal sectors. There will be winners, losers, and new ways of doing things, which will roil societies across the globe. We might call this an "nth Industrial Revolution" – after all, the popular term "fourth industrial revolution" has been around since the 1940s.

Consider three sectors. One now-classic example is transport: after Uber rolls out self-driving cars, where will all the unemployed drivers work? Another is the military, where drones and AI will likely contribute to a revolution in military affairs that may be destabilizing, hinder international cooperation, and instigate arms races unlike any the world has ever witnessed. A third is the colossal health sector, accounting for some 18% of US GDP, where AI promises to change how medical decisions are made and care is delivered. Of course, these are only three examples of industries in which AI will exert disruptive forces, but one could point to essentially any social sector.

But not much – so far, at least – suggests this will be bigger than other technological impacts, such as those contributing to the industrial revolution itself or the internet’s rise in the 1990s-2000s. Uber drivers being sacked is not much different to the internet eliminating many retail jobs with Amazon’s rise. The airplane, steamship, machine gun or tank all revolutionized warfare; and so did the internet, with cyber now a military domain alongside land, sea, air, and space. The potential change in healthcare is exciting but, for better or worse, powerful regulatory and institutional factors make the health sector as nimble as a supertanker.

These changes and their attendant disruptions will require management, just as welfare states were created and adapted to manage the social disruptions of industrialization. This requires sector-by-sector planning, and much will rely on relatively straightforward, although politically challenging, means such as welfare nets and retraining for the swathes of workers whose jobs become obsolete.

Twentieth-century history illustrates how failure to manage social dislocations or new military technologies can disrupt the global order. Another crucial ingredient of that history is competition between systems of social organization.

Competing social systems in the global order 

For the first time since the end of the Cold War, AI is enabling a plausible competitor to liberal democracy: digital authoritarianism. Specifically, AI’s greatest impact on competition in the global order will be to enable a new system of social organization – one that offers big, industrially sophisticated states a plausible path to making their citizens rich while maintaining rigid control over them. AI-related technologies enable such a system, which China is now building – and which is already being exported and emulated in a global competition with liberal democracy.

AI has already helped crack the prevailing dichotomy in which oppressive governments, by and large, remain poor while democratic governments reap economic benefits. China, along with several like-minded countries, is testing the boundaries of the global order and forcing international institutions to reassess their role in the coming decades. China’s decision to deploy emerging technologies for censorship and surveillance has not caused the country’s economy to stagnate, as might have been predicted in the past. Moreover, China is profiting from sales of technology seen as enabling authoritarian regimes: it has helped bolster censorship in Sri Lanka and has supplied surveillance equipment to Ethiopia, Iran, Russia, Zambia, and Zimbabwe. New relationships in Africa include Chinese efforts to export such digital technologies in areas of strategic importance to the US and the global order.

Such pervasive, integrated surveillance will complicate efforts by institutions like the UN to implement their mandates to protect human rights. The vast surveillance apparatus built around ubiquitous technologies such as smartphones and CCTV, coupled with colossal integrated databases, is inherently dual-use, and that ambiguity facilitates plausible deniability. China responded to the UN’s report on the detention of ethnic Uyghurs and other Muslim minorities by claiming that the allegations were simply untrue and that such measures support "maintaining lasting peace and security in Xinjiang," leaving the UN without the institutional mechanisms to protect human rights.

But what can be done? Outside influence can do little to change the trajectory of a highly capable and confident nation such as China, although this must not preclude expressing moral concerns, nor diminish the role of international institutions like the UN in maintaining peace and security between rival socio-political orders.

Instead, we must learn to manage escalating competition between digital authoritarian and liberal democratic states themselves – in particular, Sino-US escalation – as well as the competition for influence between these systems, which will extend to regions across the globe. Domestically, established liberal democracies must also, for instance, constrain the mass use of amalgamated, multifarious data on their own populations: not only by the tech oligopoly but also by the state.

Global strategy must address all three areas

Each of the three bundles of challenges requires different thinking – and policies – at the level of the UN, national and regional governments, businesses, and other stakeholders. Tackling all three is necessary for a global strategy.

First, the UN must determine its role in this global strategy if it is to remain relevant in an age where digital authoritarianism threatens democratic institutions. Much as it did during nuclear non-proliferation discussions, the UN must navigate with finesse the social disruptions resulting from ubiquitous AI adoption.

Second, the UN must support and encourage national and international dialogues on how AI is changing the nature of work and the means of production across societies. This effort must cross diverse societal and industrial sectors and balance competing interests and values. The International Labour Organization already contributes to this area.

Finally, the UN must address the rise of oppressive regimes enabled by emerging and converging technologies in a way that manages change within a robust global order, secures space for democratic societies, and protects fundamental human rights.

Dr Nicholas Wright MRCP is an Affiliated Scholar at the Pellegrino Center for Clinical Bioethics, Georgetown University Medical Center, an Honorary Research Associate at the Institute of Cognitive Neuroscience, University College London, and a Fellow at New America.

The opinions expressed in this article are those of the author and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners.

Suggested citation: Nicholas Wright, "AI & Global Governance: Three Distinct AI Challenges for the UN," UNU-CPR (blog), 2018-12-07, https://unu.edu/cpr/blog-post/ai-global-governance-three-distinct-ai-challenges-un.
