We have arrived at a turning point in our collective digital future. A powerful form of technology, artificial intelligence (AI), is able to find patterns in massive sets of unstructured data, improving its own performance as more data is introduced. AI’s capacity to rapidly enhance its own decision-making, while minimizing the kind of interference that tends to clutter human decisions, makes it extraordinarily powerful. But as we rely more and more on AI to guide our lives, major questions arise about human choice and inclusivity in the coming era. How will humans from different strata of power and income be engaged and represented in shaping the technological path and future of AI? How will we govern this “machine meritocracy”? Will it govern us?
Balance of Power
A shift in the balance of power between intelligent machines and humans is already visible. AI-driven machines create more value, produce more of our everyday products, and increasingly assume control of the design and decision-making processes that pervade our work and our lives.
Machines recognize our past patterns and those of (allegedly) like-minded people from across the world, creating an AI-driven feedback loop that shapes and reinforces our mindsets and world views.
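To make this dynamic concrete, consider a deliberately minimal sketch in Python. The function, topic names, and data below are invented for illustration and do not describe any real platform; they simply show how a recommender that ranks content purely by past engagement can reinforce itself:

```python
# Minimal, hypothetical sketch of a pattern-reinforcing feedback loop.
# All names and data are invented; real recommender systems are far more
# complex, but the self-reinforcing dynamic is structurally similar.
from collections import Counter

def recommend(history: list[str], catalogue: list[str]) -> str:
    """Recommend the catalogue item most similar to past behaviour."""
    counts = Counter(history)
    # Rank items by how often the user has already engaged with them:
    # past behaviour alone decides what is shown next.
    return max(catalogue, key=lambda item: counts[item])

history = ["politics_A", "politics_A", "sports"]
catalogue = ["politics_A", "politics_B", "sports", "science"]

# Each accepted recommendation feeds back into the history,
# so the dominant pattern grows more dominant over time.
for _ in range(3):
    history.append(recommend(history, catalogue))

print(history)  # politics_A keeps winning: the bubble tightens
```

After a few iterations the single most-viewed topic crowds out everything else. Production systems are vastly more sophisticated, but the loop has the same shape: what we have done determines what we are shown, which shapes what we do next.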
More and more, we trust machines to “get us right,” to know us in ways that we may not know ourselves. Smart cars share our behavioural patterns with companies in exchange for customized conveniences, and video games feed our socioeconomic profiles and cognitive preferences back to their makers.
There are limitations: machines struggle to account for cognitive disconnects between what we purport to be and how we actually behave. Reliant on tangible data from our actions, the machine might act as a constraint, holding us to what we have been, rather than what we might hope to become.
The Ethics and Values of AI
AI’s growing role in our personal decision-making and identity raises a series of related ethical questions.
Will the machine eventually eliminate personal choice? There are tremendous upsides to allowing AI to help us make better decisions. For example, AI can help optimize our schedules and transportation routes in order to reduce our carbon footprint, and there is some evidence that AI may help us find better romantic partners. But will this deprive us of our independence, our capacity to make mistakes and learn from them?
Will AI drive us into polarization or even isolation? Already, machines have created bubbles filled with like-minded people through social media networks and other platforms, but this could go a step further: AI could become a tool for “digital social engineering,” creating micro-societies designed for economic efficiencies, political cohesion, and possibly monolithic identities.
Will AI replace our judgement? AI judges and engages us on the basis of our expressed values and actions, but it does not account for suppressed values, dormant thoughts, or emergent beliefs. In situations where there is no codified precedent for AI, it might rely on inapposite past actions, resulting in objectionable or even dangerous decisions. Will the machine respect our right to self-reinvention?
Will AI entrench discrimination? There is a risk that AI’s use of broad averages and past actions could entrench discriminatory practices. Uber’s AI programs, for example, have been found to discriminate against customers by using zip code data to identify where riders are likely to have originated. Moreover, a programmer’s inherent biases are often invisibly expressed in algorithm design; the flawed facial recognition program in Apple iPhones offers one example. As AI invisibly assumes control of our decisions, could we come to depend ever more heavily on these embedded biases?
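To see how a seemingly neutral feature can carry discrimination, consider a deliberately simplified, hypothetical sketch. The zip codes, rates, and threshold below are invented, and this is not a description of Uber’s or any other company’s actual system:

```python
# Deliberately simplified, hypothetical illustration of proxy discrimination.
# The data and scoring rule are invented. The point: a model never shown a
# protected attribute can still discriminate if a feature such as zip code
# correlates with demographics.

# Historical average cancellation rates per zip code (invented numbers).
avg_cancellation_rate = {"10001": 0.05, "60621": 0.22}

def service_priority(zip_code: str) -> str:
    """Score a ride request using the zip code's historical average."""
    # Every resident inherits their neighbourhood's past statistics,
    # regardless of their own behaviour.
    rate = avg_cancellation_rate.get(zip_code, 0.10)
    return "high" if rate < 0.10 else "low"

# Two otherwise identical customers are treated differently purely by address.
print(service_priority("10001"))  # high
print(service_priority("60621"))  # low
```

No protected attribute appears anywhere in this code, yet because address correlates with demographics, the historical average quietly imports past inequities into every new decision.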
A New Charter of Rights
These questions show that the AI genie is very much out of the bottle. We should not, however, attempt to put it back in. Instead, we should look to harness the transformative potential of AI for our common good. We therefore propose a “Charter of Rights for the Global AI Revolution,” an inclusive, collectively developed, multi-stakeholder charter of rights that will guide the development of AI and lay the groundwork for the future of beneficial human/machine co-existence.
What would such a process look like? Ideally, the initiative would aim to create a global, multi-stakeholder institution for AI governance, with the capacity to track and analyse worldwide developments and to convene discussion of them. Multi-sectoral participation would be crucial, as would a shared recognition that promoting innovation, openness, and equity outweighs narrower concerns of sovereignty and national interest. Such an institution could act independently or under the auspices of the United Nations, though it would need stronger measures to ensure inclusivity than those currently found in the Bretton Woods institutions.
Such an institution would require a foundational document to guide it, the Charter of Rights for the Global AI Revolution. Key questions that would need to be addressed within the Charter would include:
- How can a balance be struck between the transformative role of AI in creating “better” decisions, and the risks that it will impinge too deeply upon human decision-making?
- What role should AI play in socio-political processes such as elections, education, and opinion-forming?
- How can we ensure that data is not used in discriminatory ways, or misused, to the harm of some?
- To what extent should AI focus on social benefits versus the rights of the individual?
- What kind of institution and sets of rules would best reflect the risks, benefits, and rapid transformations of AI?
Such a process will not be easy, but dialogue on these questions is crucial if we are to establish a relationship of trust in AI across the globe, and thus avoid the many risks this technology poses. The greatest risk is that we continue along the path we are currently on, allowing AI to gradually and often invisibly co-opt our collective and individual decision-making. The fourth industrial revolution—a revolution of cognition—demands a new approach to global governance if we are to capitalize on this unique opportunity to drive the fair and value-based development of all humanity.
Dr Olaf Groth is a Professor of Strategy, Innovation and Economics and Program Director for Disruption Futures at Hult International Business School. Dr Mark Nitzberg is Executive Director of the Center for Human-Compatible AI at the University of California, Berkeley, and serves as Principal and Chief Scientist at Cambrian.ai. Dr Mark Esposito is co-founder and President of Nexus FrontierTech and Professor of Business and Economics at Hult International Business School.
The opinions expressed in this article are those of the authors and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners.
Suggested citation: Olaf Groth, Mark Nitzberg, and Mark Esposito, "AI & Global Governance: A New Charter of Rights for the Global AI Revolution," UNU-CPR (blog), 15 October 2018, https://unu.edu/cpr/blog-post/ai-global-governance-new-charter-rights-global-ai-revolution.