The urgency of artificial intelligence (AI) governance is now well understood, as is the complexity of the endeavor, which entails numerous challenges. One of many open questions is whether such a governance system will place AI in the governed or the governing role.
Two, perhaps sequential, scenarios can be foreseen, one offering opportunities and the other replete with risks: 1) humans and human-run institutions will debate and decide the rights and responsibilities of AI systems (AIs) in society and global governance, or 2) AIs will become so intelligent and so powerful that they dictate the rules.
Opportunities
The first scenario is in the early stages of development: it lacks coordination and is characterized by separate, mostly academic initiatives, apart from a few criticized publicity stunts. It rests on moral and legal philosophy on the one hand and on techno-optimism towards AI-integrated governance on the other.
Philosophical considerations about the moral status and potential personhood of machines and AIs have accompanied technical developments for some time; examples include Allen et al., Chopra and White, and Bostrom and Yudkowsky. Recent publications by Bayern and LoPucki focus on aspects of legal personhood for so-called algorithmic entities, which would entail rights such as privacy, property ownership, and the capacity to enter into contracts.
A natural next step in this AI empowerment debate would be to explore if and how AIs should be granted active and equal participation in governing institutions. Consider Erdélyi and Goldsmith’s proposal to create an “International Artificial Intelligence Organization”, an institution resembling existing intergovernmental organizations. History has shown that the discrimination against and exclusion of (human) communities from governance are not sustainable and have eventually led to the overthrow of such regimes. Yet AIs may soon constitute such a community. Although the UN has been a pioneer of equality, inclusion, and anti-discrimination, non-human entities have not (yet) been considered, let alone addressed by the UN Sustainable Development Goals (SDGs).
Techno-optimism towards AI-integrated governance is sparked, on the one hand, by the rapid succession of well-publicized AI successes. On the other hand, it is based on findings in cognitive psychology: humans have been shown to make frequent errors in rational reasoning due to biases and faulty heuristics, which are related to the “kluge” structure of the human mind. Human imperfection in rationality, combined with political short-termism and the prioritization of national interests, may often lead to policies that are suboptimal in the long run; the continued absence of decisive international action against climate change is one example.
In contrast, many researchers believe that AIs will be thoroughly rational and, overall, smarter than humans, a status often referred to as artificial general intelligence. There has been major progress lately in various fields through machine learning and big data (e.g., self-driving cars, medical science, and stock trading). Since political decision-making is increasingly data-driven, governmental institutions have already begun handing number crunching over to AI. There are thus opportunities for AIs to help humans address pressing issues such as the UN SDGs. A report by UN ESCAP outlines many examples of where AI could add value, such as infrastructure maintenance and disaster preparedness.
Risks
The second scenario may occur later but is already gaining attention. Pioneers of this field are Bostrom and Yudkowsky, who, in a nutshell, foresee that AI may evolve into superintelligence, which would massively exceed human intelligence and would be difficult to control. The basic question is how to ensure that such an AI pursues human goals and values, which are notoriously hard to define.
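To make this difficulty concrete, here is a minimal toy sketch in Python (purely illustrative; the functions, names, and numbers are hypothetical and not drawn from the literature above). It shows how an optimizer given a proxy objective can diverge sharply from the value its designers intended:

```python
# Toy illustration of the value-loading problem (hypothetical example).
# The designers intend the system to maximize "true human value", but can
# only specify a proxy objective. A perfectly rational optimizer then
# maximizes the proxy, not the intention.

def true_human_value(intensity: float) -> float:
    # Intended goal: value peaks at moderate intensity, then turns harmful.
    return intensity * (2.0 - intensity)

def proxy_reward(intensity: float) -> float:
    # What the AI is actually told to optimize: a metric that grows without bound.
    return intensity

def optimize(reward, candidates):
    # The optimizer faithfully picks whatever scores highest on the
    # objective it was given, not on what its designers meant.
    return max(candidates, key=reward)

candidates = [i / 10 for i in range(31)]  # action intensities 0.0 .. 3.0
chosen = optimize(proxy_reward, candidates)

print(f"optimizer chooses intensity {chosen:.1f}")
print(f"proxy reward:     {proxy_reward(chosen):.2f}")      # 3.00, maximal
print(f"true human value: {true_human_value(chosen):.2f}")  # -3.00, harmful
```

Even in this trivial setting the optimizer is not malfunctioning; it does exactly what it was told. Specifying objectives that actually capture human values, rather than convenient proxies, is the hard part.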
In addition to this so-called “value-loading problem”, Brundage et al. have identified various other AI risks to digital, physical, and political security. The particular threats of a military or general AI race have also been explored, for example by Cave and ÓhÉigeartaigh. Several institutions consider these scenarios global catastrophic or existential risks. However, UN bodies have only just begun discussing AI-related global risks, while other global risks, such as nuclear safety and climate change, dominate their agendas.
AI safety is an area of research aimed at reducing AI risks. Its relevance has been neglected outside academic circles, whether out of techno-optimism or in pursuit of perceived gains in the AI race; superintelligence skepticism has even been linked to the interest of corporations in downplaying such concerns. For this reason, the Center for the Governance of AI is considering “centralized control over AI development, or extensive surveillance of AI projects with ready ability to shut them down”. In other words, it proposes not letting AI participate in governance now in order to prevent AI totalitarianism later.
Conclusion
Given the rapid and evolving nature of AI, it is recommended that international governmental institutions assess these considerations as a matter of priority, particularly the complex value-loading problem. While AI remains a work in progress and many of the aforementioned risks may not materialize until later, the unknown timescale of AI progress makes it better to start these debates sooner rather than later. An AI race is not desirable; ideally, it would be constrained through diplomatic agreements and treaties, verified, for instance, by a UN body with a mandate similar to the IAEA’s for nuclear safety.
Secondly, recognizing the manifold opportunities outlined above, the roles of advanced AIs in governance need to be considered: on the one hand, applying their superior capabilities to transform our world in line with the 2030 Agenda for Sustainable Development and to harness AI for good; on the other, examining whether non-human entities with moral status and personhood could be incorporated into existing multilateral frameworks.
The latter role may sound peculiar, but it would be a timely move against a potential new form of speciesism and a landmark step for humanity to be “ahead of the game” after having been late all too often in the past. The practicalities will be complex, and many questions will need to be asked.
When harnessing AIs, it must be kept in mind that AI is a dual-use technology: an application that is beneficial in one context could be harmful in another. And if AIs more intelligent than humans are allowed to advocate for themselves, they could be prima facie cooperative yet later carry out a “treacherous turn”, as described by Bostrom. The implementation of the second recommendation should therefore be carefully considered.
Dr Soenke Ziesche is currently a senior researcher on artificial intelligence at the Maldives National University. He has worked for the United Nations in humanitarian, recovery, and sustainable development operations, as well as in data and information management. He has also worked in the field in Palestine, Sri Lanka, Pakistan, Sudan, South Sudan, and Libya.
The opinions expressed in this article are solely those of the author and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners.
Suggested citation: Soenke Ziesche, "AI & Global Governance: A Seat at the Negotiating Table for AI? Opportunities and Risks," UNU-CPR (blog), 2019-08-02, https://unu.edu/cpr/blog-post/ai-global-governance-seat-negotiating-table-ai-opportunities-and-risks.