Blog Post

The Ethical Anatomy of Artificial Intelligence

Following the UN Secretary-General's establishment of a High-level Panel on Digital Cooperation, what will the governance of AI look like?

Date Published
29 Jul 2018
Author
Eleonore Pauwels

No other technology has recently generated so much hope and fear, expectation and trepidation, celebration and condemnation as Artificial Intelligence (AI). Even in just the past few months, there has been an endless array of headlines about technological promises too powerful for humankind to refuse; battles being waged for AI supremacy; and algorithms capable of both social nudging and adversarial attacks on our cyber infrastructure. As a species of storytellers, we tend to be thrilled by such narratives of survival.

Mapping the ethical anatomy of an emerging technology like AI is quite a different task. To succeed, such an exercise has to be inclusive, bridging disciplines and cultures.

The challenge of AI

In its current dominant form, deep learning, AI optimizes predictive reasoning by learning how to identify and classify patterns within massive amounts of data. With this super-computing efficiency – being able to evaluate myriad scenarios in seconds – deep learning offers unmatched investigative opportunities, such as comparing genomes across an entire population, recognizing a certain face in a crowd, or labelling any location on Earth based on millions of pictures.
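
To make the pattern-classification mechanism concrete, here is a minimal, illustrative sketch in Python using scikit-learn. The data is synthetic and the model is tiny; real deep learning systems train far larger networks on millions of images, genomes or location photos, but the underlying logic – learn patterns from labelled examples, then predict labels for unseen inputs – is the same.

```python
# A minimal sketch of supervised pattern classification, the core
# mechanism behind deep learning: the model sees labelled examples
# and learns a function that maps new, unseen inputs to predicted
# labels. Synthetic data stands in for "massive amounts of data".
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 2,000 synthetic data points, each described by 20 features and
# belonging to one of two classes.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multi-layer ("deep") network that learns the patterns
# separating the classes from the training examples alone.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=0)
model.fit(X_train, y_train)

# The trained model now classifies inputs it has never seen before.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```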

It is therefore no surprise that multilateral institutions are taking notice. UN Secretary-General António Guterres has just established a High-level Panel on Digital Cooperation to foster a “broader global dialogue on how interdisciplinary and cooperative approaches can help ensure a safe and inclusive digital future for all.” I have spent the last five years thinking about how cooperation and governance apply to my own field of expertise: AI and other emerging technologies.

What does “cooperation” mean in a world where only a small proportion of people – about 0.004% of the global population – have the knowledge and power to build machines intelligent enough to potentially decide who wins on the job market, who can obtain insurance or has the upper hand in the courtroom, or whose DNA or behavioural patterns will be mined by marketers? Never have we faced a technology like AI: its design is in the hands of a few, mostly born into societies of abundance, yet it is powerful enough to shape multifaceted aspects of our lives. This asymmetry of knowledge and power raises significant challenges for global cooperation.

Only a diversity of knowledge and experience will help foster diligent technical design, anticipate ethical failures, and minimize the risks of unintended harms. For instance, it took a group of researchers who call themselves “Black in AI” to reveal the troubling ways facial recognition technologies fail to trace the features of individuals with darker skin tones.
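
The kind of audit that surfaced those failures can be sketched simply: instead of reporting a single, aggregate accuracy figure, disaggregate a model’s error rate by demographic group. The sketch below uses hypothetical audit records in plain Python; it is not the actual methodology or data behind any published study, only an illustration of why aggregate metrics hide disparities.

```python
# A minimal sketch of a disaggregated audit: compute the error rate
# separately for each group rather than one overall number. The
# records below are hypothetical; a real audit of a facial
# recognition system would use a benchmark labelled by skin type.
from collections import defaultdict

# (group, prediction_was_correct) pairs -- hypothetical results.
results = [
    ("darker-skinned female", False), ("darker-skinned female", False),
    ("darker-skinned female", True),
    ("darker-skinned male", True), ("darker-skinned male", False),
    ("lighter-skinned female", True), ("lighter-skinned female", True),
    ("lighter-skinned male", True), ("lighter-skinned male", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += 0 if correct else 1

# The aggregate score can look acceptable; per-group rates may not.
overall = sum(errors.values()) / sum(totals.values())
print(f"Overall error rate: {overall:.0%}")
for group in sorted(totals):
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")
```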

This is exactly why we need to think and talk about digital cooperation. This strategic process should engage not only the best and the brightest but also the broadest range of engineers, academics, civil society activists, and others, to figure out together what kind of world we want to live in and how AI can help us achieve the positive technical and ethical outcomes that will lead to that future.

If we, as global citizens, want to reclaim our technological future, we need to carefully consider the risks involved and the sources of inequalities and disempowerment that hide in mundane algorithmic designs. Access to the knowledge and education required for designing and anticipating the role of an emerging technology like AI is still a luxury.

How can our far-reaching algorithmic inventions be designed and governed so that they meet the ethical needs of a globalizing world?

Learning from experts

To reflect on such questions, I joined the Centre for Policy Research at United Nations University, a UN-focused policy think tank examining a raft of global public policy challenges, from modern slavery to emerging cybertechnologies. A few weeks ago, we were fortunate enough to gather, opposite the United Nations Headquarters, a braintrust of experts interested in discussing ways to govern AI. The symposium left me with a number of lingering thoughts.

We are witnessing a swelling public anxiety about the loss of control to an algorithmic force that seems to escape our modes of understanding, trust and accountability. Using different examples at the intersection of AI, security, ethics and human rights, several participants reopened my eyes to the fact that societies are governed by human-made technologies as much as by the rule of law. One of the principal concerns at the heart of governing AI therefore becomes whether we will eventually find ourselves controlled by powerful technical systems whose design we did not fully understand and whose ramifications we did not anticipate.

Many brilliant technologists and philosophers have challenged the common assumption that technology is an apolitical and amoral force. They have argued that complex technologies are inherently dual-nature, not just dual-use. This argument goes back to insights developed decades ago in the field of Science and Technology Studies, which explain that modern technology is essentially co-produced by scientific and societal actors. Modern technology is not about algorithms as abstract artefacts used for good or bad, but about algorithms endowed with a specific structuring function – a certain power and nature – as we humans have designed them. It is this structuring function, the technological design process itself, and the real-world implications these design choices have once technologies are unleashed that are increasingly under scrutiny within society.

To crack the mysteries of complex data-based problems, like those we face in genetics or climate science, AI will be an unparalleled ally. Yet this form of predictive intelligence could also magnify risks or unknowns that are difficult to anticipate, such as false positives or biases. It could also optimize situations in ways that conflict with our societal values and, more generally, reflect the priorities, preferences, and prejudices of those who have the power to shape AI.

At the Centre for Policy Research’s recent symposium, Joy Buolamwini of the MIT Media Lab, who coined the term “Coded Gaze,” gave a brilliant demonstration of how current facial recognition algorithms, in their functioning nature and optimization processes, fail to discern the features of African Americans – as if our lenses, or our gaze itself, had been subverted.

Minorities could be stigmatized and ostracized in powerful new ways. As Dinah PoKempner from Human Rights Watch explained, it is also urgent to consider and assess how specific AI applications might violate different human rights. As AI is folded into a range of new biotechnological and behavioural technologies, these questions take on new dimensions.

The Internet of Bodies

A recent New York Times article reports that governments are using facial recognition software to shame jaywalkers, pick out rioters in a crowd, and identify dissidents or protesters they may wish to track or intimidate. Databases of faces and of financial and personal information are connected in order to rate the credit scores, job applications, and loyalty of Chinese citizens to the State.

In the near future, biosensors and algorithms will together capture and analyze an ever more refined record of our biometrics, vital signs, emotions and behaviours. AI will watch, track and evaluate us: we will go from the predictive power of one algorithm to the next. We may unwittingly give algorithmic networks unprecedented access to our bodies, genomes and minds, creating possibilities for social and bio-control that surpass Foucauldian nightmares. I call this set of networks the “Internet of Bodies.”

Never before has our species been equipped to monitor and sift through human behaviours, physiology and biology on such a grand scale. Geopolitical tensions might rise when states with the know-how to harness AI commodify, at a very high value, the biological data of other countries’ populations and ecosystems.

This prompts a broader philosophical question: what are the implications of living with ubiquitous networks of self-learning machines that acquire and deploy knowledge using reasoning, rules and values that we no longer understand or share?

Creating feedback loops

In the biological world, feedback loops are crucial for species to adapt rapidly and successfully to stressors in an evolving ecosystem. Biological resilience comes from an artful mix of genetic diversity and systemic regulation. Similar regulatory feedback loops are urgently needed in AI.

From Microsoft and Intel to Google and IBM, the world’s major AI labs have recently published social responsibility principles, showing interest in self-regulation and in taking on real-world problems. Yet if these principles do not materialize into practices, intelligent technologies still risk being designed to prioritize exponential financial growth in societies of abundance. Conversations confined to tech’s inner circles close, rather than open, the prospect for us, citizens of the world, to decide what we want our futures to be.

It is time to pause, reflect and ask pertinent questions. How can we foster a global, inclusive and complex model of cooperation and governance in AI? The United Nations’ High-level Panel on Digital Cooperation could help public and private actors frame the modalities of such a cosmopolitan conversation.

First, we need to think about ways to counterbalance the asymmetry of power between the supposed tech-leaders and the tech-takers. As David Li from the Shenzhen Open Innovation Lab put it brilliantly at the recent Centre for Policy Research symposium, we should experiment with “AI from the street,” giving diverse communities around the globe an opportunity to learn how to turn their data, ideas and designs into AI innovation. What if our cities could be globally connected, yet locally inventive and inspired by a diversity of knowledge and vision?

The same knowledge-sharing should take place between governments, in the hope of fostering strategic foresight dialogues and offsetting the current rhetoric of an AI economic and military race.

Second, we need to discuss how to build a social license for AI, including new incentive structures that encourage state and private actors to align the development and deployment of AI technologies with the public interest. Technologies of humility – a term coined by Sheila Jasanoff of the Harvard Kennedy School – will depend on more than safe algorithms and well-curated data. They will rest on the humility of those who thought they could fully master AI, and on the empowerment of others who can imagine globally beneficial intelligent designs.

A more equal and peaceful world will not appear by chance. We need to foster an inclusive, “cosmopolitan” conversation to anticipate and shape not only AI’s risks but also its promises.

 

Suggested citation: Eleonore Pauwels, “The Ethical Anatomy of Artificial Intelligence,” UNU-CPR (blog), 29 July 2018, https://unu.edu/cpr/blog-post/ethical-anatomy-artificial-intelligence.
