AI researchers may look back on 2018 as the year that human rights became crucial to advancing the technology. Over the last six months of the year, a slew of reports focused on “artificial intelligence and human rights” were published by a variety of well-respected entities, including: the most recent report of the UN Special Rapporteur on freedom of opinion and expression; the Berkman Klein Center's report on “Artificial Intelligence & Human Rights: Opportunities & Risks”; Access Now's “Human Rights in the Age of Artificial Intelligence” report; the Council of Europe's Draft Recommendation of the Committee of Ministers to member States on human rights impacts of algorithmic systems; and Business for Social Responsibility's “Artificial Intelligence: A Rights-Based Blueprint for Business” series.
Earlier in the year, I was asked to help kick off a workshop organized by Data & Society on the same topic. I wrote this post based on the remarks I prepared for that workshop, supplemented by a few takeaways from the recent reports.
I come to this issue as a trained lawyer who spent the last decade working on human rights, with a special focus on “business and human rights” and human rights online. I have seen how the international human rights (IHR) framework can enable better understanding and contestation of human rights norms, monitor and mitigate the risk of human rights abuses, generate input and output legitimacy, and facilitate trust and coalition-building. Specifically, I see four distinct but mutually reinforcing utilities that the IHR framework could contribute to the design, use, and governance of AI.
Providing Agreed Norms for Assessing and Addressing Impacts
First, the IHR framework provides a set of agreed norms that can help us understand the relevance of certain impacts that AI could have, and thus illuminate potential benefits, risks, and intervention points. As a recent report from the esteemed Berkman Klein Center at Harvard makes clear, AI is being used today primarily to supplement, refine, and enhance existing information management and decision-making processes, many of which are already governed by existing rules, including laws and regulations. To the extent that many of the near-term impacts of AI will constitute changes in degree, not changes in kind, there may be utility in structuring both our examination of the social impacts of AI and our assessment of the adequacy of existing and potential new rules through the established framework provided by IHR.
Looking through the prism of this existing framework can help ground considerations that can be devilishly complex and sometimes abstract. As BSR's Dunstan Hope and Mark Hodge have noted, it can also help us identify similarities with other historic and contemporary policy challenges, which can in turn point us to existing laws and mechanisms that may be adapted and applied to this new space. This is important because adapting existing mechanisms can often be done with more alacrity than establishing new ones.
Since I first wrote on this topic, the UN Special Rapporteur on freedom of opinion and expression, David Kaye, has released a wonderful report on artificial intelligence and human rights that demonstrates and validates this argument in practice. The report uses IHR to articulate several key areas of concern and elaborates important recommendations for how States and companies can better anticipate and mitigate the potential human rights impacts of AI-related activities.
Of course, there is plenty of disagreement among States, let alone other stakeholders, about what particular human rights mean in theory and in practice. Yet it is hard to imagine that starting from a less established baseline would lead to anything other than even more divergence.
A Shared Language Can Facilitate Understanding and Engagement
Second, the IHR framework provides a shared language that lowers barriers to entry and engagement, which in turn can generate more diverse, creative thinking. This can enhance both the effectiveness and legitimacy of outcomes.
Of course, the IHR framework is not immune from barriers to participation. For instance, the Berkman Klein report rightly points out the relatively underdeveloped nature of the doctrines and procedures related to “economic, social, and cultural rights” as compared to “civil and political rights.” Nevertheless, the framework has served for more than half a century as a globally acknowledged language for discussing and contesting understandings of social needs, impacts, responsibilities, and outcomes. This is a utility in and of itself, and it has the added benefit of enhancing the legitimacy of any outcomes that may emerge.
There are two important limitations worth noting here. First, IHR law is primarily articulated and understood as governing the behavior of States, while many of the important decisions around AI today are being made by non-state actors. While this is objectively true, its importance should not be overstated.
The IHR framework was never framed exclusively around States. For instance, the International Labour Organization, which predates the post-World War II International Bill of Human Rights, operates in a tripartite manner that includes representatives of workers and employers, as well as States. Furthermore, the Universal Declaration of Human Rights (UDHR), which is the foundation of the contemporary IHR framework, directs “every individual and every organ of society … [to] strive … to secure [the] universal and effective recognition and observance” of human rights (emphasis added). More recently, the UN Guiding Principles on Business and Human Rights have helpfully outlined and clarified the duties of States to protect human rights and the related but distinct responsibilities of corporations to respect human rights in the course of their activities.
Second, and relatedly, the scope of State obligations under IHR law continues to be interpreted primarily through the lens of “jurisdiction,” which is typically defined using frames such as “territory,” “nationality,” and “control,” frames that are being fundamentally challenged by the borderless nature of our information architecture. This presents a range of legitimate challenges that are germane to many aspects of our digital age, not only AI.
An Architecture for Convening, Deliberation, and Enforcement
Third, the IHR framework has developed a relatively elaborate architecture of institutions and processes (regional and international courts, UN specialized agencies, intergovernmental bodies, treaty bodies, and special procedures, among others), which can be used both to facilitate consideration of these issues and, in some instances, to monitor and even enforce implementation of any resulting outputs.
There has been much criticism of the UN’s human rights architecture of late, and after six years of working in the multilateral office at the human rights bureau in the United States Department of State, I am personally sympathetic to some of it. However, my experience has also taught me to appreciate that Human Rights Council resolutions and special procedures can have real normative impacts, and that mechanisms like the Universal Periodic Review can provide meaningful accountability in certain circumstances.
It is also important to acknowledge that the UN system is not the only human rights regime in town. Regional human rights systems have demonstrated these capabilities as well. And, even more importantly, there is governance innovation happening in the human rights space. Examples of what I have elsewhere referred to as “creative multilateral organizations” include newer, issue-specific multilateral organizations (like the Freedom Online Coalition) as well as formalized multi-stakeholder initiatives (like the Global Network Initiative), which are complementary to, but independent from, the UN and regional systems and can offer flexibility and workarounds.
A Roadmap and Moral Compass
Finally, and most intangibly, human rights can provide an aspirational, positive roadmap that can help guide decision making, including the balancing of trade-offs. This is particularly important because grappling with disruptive and potentially dangerous forces that present complex collective action problems, such as AI, can often constitute tremendously challenging and dispiriting work.
The IHR framework was put in place after the Second World War to create a shared, global set of expectations related to human dignity and self-governance. The impetus for that project was in part to address the underlying conditions that were perceived to have fed that great conflagration and, as the Preamble to the UDHR puts it, “resulted in barbarous acts which have outraged the conscience of mankind.”
Infused with that collective mission and self-awareness, the contemporary IHR framework has both endured and evolved. It is not a panacea, but it has underscored our shared humanity, provided a moral compass, and generated tenacity and purpose even in the face of daunting odds, and it can continue to do so.
Jason Pielemeier is the Policy Director at the Global Network Initiative. Prior to joining GNI, Jason served as a Special Advisor at the US Department of State, where he led the Internet Freedom, Business, and Human Rights section in the Bureau of Democracy, Human Rights and Labor. In that role, Jason worked with colleagues across the US government, counterparts in other governments, and stakeholders around the world to protect human rights online and promote responsible business conduct.
The opinions expressed in this article are those of the author and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners. This post reflects his personal views and should not be attributed to GNI or any GNI members.
Suggested citation: "AI & Global Governance: The Advantages of Applying the International Human Rights Framework to Artificial Intelligence," UNU-CPR (blog), 2019-02-26, https://unu.edu/cpr/blog-post/ai-global-governance-advantages-applying-international-human-rights-framework.