
AI & Global Governance: Human Rights and AI Ethics

Why Ethics Cannot Be Replaced by the Universal Declaration of Human Rights

Date published: 19 July 2019
Author: Cansu Canca

In the increasingly popular quest to make the tech world ethical, a new idea has emerged: just replace “ethics” with “human rights”. Since no one seems to know what “ethics” means, it is only natural that everyone is searching for a framework that is clearly defined, and all the better if it fits onto a single one-page document: the Universal Declaration of Human Rights (UDHR). Unfortunately, like many shortcuts, this one simply does not solve the problem.

Let me start by summarizing the argument for using the UDHR to solve questions surrounding AI ethics. To spell out this argument, I will draw on a blog post and the Harvard Law School report on which it is based: “Artificial Intelligence & Human Rights: Opportunities & Risks.” Here is the basic argument: the UDHR provides us with a (1) guiding framework that is (2) universally agreed upon and that results in (3) legally binding rules—in contradistinction to “ethics”, which is (1) a matter of subjective preference (a “moral compass,” if you will), (2) widely disputed, and (3) only as strong as the goodwill that supports it. Therefore, while appealing to ethics to solve normative questions in AI gets us into a tunnel of unending discussion, human rights is the light we need to follow to get out on the “right” side. Or so the argument goes.

Unfortunately, this argument rests on an overestimation of what the UDHR is capable of and an underestimation of what ethics is and does—both common mistakes. The UDHR consists of thirty articles that together draw a picture of what we consider to be of utmost importance for every individual, regardless of particulars such as culture, gender, or socioeconomic circumstances. It touches upon various aspects of human life, from education to marriage, from religion to health, securing the universally held values on each of these matters. In that sense, the UDHR is universal and widely agreed upon. However, at that level of generality, we also have agreement in ethics. We hold ideas like happiness, satisfaction, self-governance, and justice to be morally valuable; we recognize the moral worth of health, education, work, and other core aspects of human life. Ethics and the UDHR are on the same page, as long as we keep things general. But questions about what is the right thing to do, or which policy is the right one to implement, become challenging only when these dearly held values conflict, necessarily involving trade-offs. When we dive that deep, the UDHR is simply unable to guide us on those questions. Resolving such challenges is the job of ethical reasoning.

Let me explain this using a case from the aforementioned report: the use of diagnostic AI systems in healthcare. AI systems can be used to improve diagnostics, and the Harvard report lists the positive and negative impacts that such systems could have on human rights. The accuracy and efficiency of diagnostic AI tools have a positive impact on the rights to life, liberty, security, adequate standards of living, and education. In contrast, the need for data to construct these tools results in a negative impact on the right to privacy. So what does this mean going forward: should we or should we not develop diagnostic AI tools?

One answer is to try to solve this conflict between the negative impact on the right to privacy and the positive impact on the other rights through technical means. If we can find methods for building diagnostic AI tools without violating privacy, then we do not have to engage in a balancing act between different rights. So far, so good. Note that we did not need the UDHR framework to reach this result. Using utilitarianism or Kantian ethics, we would have concluded the same. We have a strong preference for privacy as well as a strong preference for better diagnostics; in a utilitarian framework, any AI system that could satisfy both preferences would be the right one. Taking a Kantian approach, we would argue that treating people as ends requires obtaining their consent before accessing private information. It would similarly require developing the means (such as efficient AI tools in healthcare) to help people remain healthy and alive to pursue their ends. Granted, the limits of both of these obligations are open to debate, but the ideal result would be to fulfill them both.

But what if it is not possible to have it all? What if we cannot fully respect privacy while also developing efficient AI tools as soon as possible for all those patients who are suffering or dying right now? Using the UDHR framework, we do not have an answer. In fact, we are at a dead end. Once we take one step beyond these non-binding, universally agreed-upon articles, we are at the level of binding national laws—which are influenced by the UDHR to varying degrees, but which inevitably inject their own non-universal interpretations. Moreover, some countries adhere to the UDHR only in name, not in spirit.

By contrast, as messy as ethics is, ethical theory does not leave us at a dead end in dealing with this conundrum. In a utilitarian framework, we would try to maximize the efficiency of AI tools while minimizing violations of privacy so as to reduce overall harm. If there were disputes, they would be about exactly where to draw the line; the principle—including the principle for how to handle the trade-off—would be well defined. In a Kantian framework, we might have to work out what privacy means in, for example, the context of de-identified data that could be traced back to individuals only through sophisticated means, and whether a strict rule against such re-identification should apply. And in theories of social justice, we would look at the effects of these decisions on different segments of society.

In other words, in an easy case, ethics would give as straightforward an answer as the UDHR. When different human rights conflict, however, we would have to turn to principles of moral and political philosophy to reason further. Ethical reasoning would allow us to reach a set of justifiable actions, even when no single right answer prevails beyond dispute. This is a clear advantage over the UDHR, which offers no guidance for resolving conflicts between different rights—revealingly, the Harvard report also only demonstrates such conflicts rather than working them out.

I do not mean to say that the UDHR is of no use in the discussion of ethical tech. Its clarity, legacy, and wide acceptance make the UDHR a good starting point for exploring what might be problematic about any given AI system or the practices used to develop it. However, if the aim is not just to identify the problem but also to solve it, then the UDHR is simply inadequate. Here, I invite you to engage in ethics.

Cansu Canca is a philosopher and the founder/director of the AI Ethics Lab, where she leads teams of computer scientists and legal scholars to provide ethics analysis and guidance to researchers and practitioners. She holds a doctorate in philosophy specializing in applied ethics and works on the ethics of technology and population-level bioethics, with an interest in policy questions. Prior to the AI Ethics Lab, she was a lecturer at the University of Hong Kong and a researcher at Harvard Law School, the Harvard School of Public Health, Harvard Medical School, the National University of Singapore, Osaka University, and the World Health Organization.

The opinions expressed in this article are solely those of the author and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners.

Suggested citation: Cansu Canca, “AI & Global Governance: Human Rights and AI Ethics,” UNU-CPR (blog), 19 July 2019, https://unu.edu/cpr/blog-post/ai-global-governance-human-rights-and-ai-ethics.
