
AI & Global Governance Platform: How Should UN Agencies Respond to AI and Big Data?

The three forces shaping United Nations approaches to Artificial Intelligence and big data.

Three forces are shaping United Nations (UN) approaches to Artificial Intelligence (AI) and big data: the broad mission of the UN and the specific mission of each UN agency; the rapid emergence of new technologies; and the political narratives that frame AI and big data. By analyzing how these three forces combine, align, contradict and potentially undermine themselves and one another, UN agencies can develop guidelines and strategies to determine which (if any) AI and big data technologies serve their specific missions.

What does this look like in practice? We illustrate this by considering how these three forces might shape the UNAIDS response to HIV/AIDS.

The Three Forces and the UNAIDS HIV Response

Three forces are intertwined in the UNAIDS HIV response: the UNAIDS mission to respond to HIV effectively, technological advances and their availability, and political narratives that frame AI and its uptake.

The first force is the UNAIDS target of zero new HIV infections, zero discrimination and zero AIDS-related deaths. UNAIDS also aims to speak out with, and for, the people most affected by HIV in defense of human dignity, human rights, and gender equality, in line with the overall UN mission.

The second force is the rapid emergence of new AI and big data technologies to assist HIV diagnosis, treatment and prevention. For example, a prototype home testing device that attaches to a smartphone can detect HIV status in 15 minutes. While this device promises an HIV diagnosis at least as accurate as currently available HIV home tests, its connection to a smartphone raises issues around informed consent, privacy, and data storage. Does this high-tech test actually benefit people living with HIV more than existing low-tech tests, or does it put them at greater risk than offline testing because of how their data, including their HIV status, might be used? This is a question UNAIDS must consider as it assesses the potential use of AI and big data in its HIV response.

The third force concerns the political narratives and framing of AI and big data, and their impact on the HIV response. Three AI narratives dominate contemporary discussions: the dystopian account of AI, driven by fear; the ethical account of AI, driven by hope; and the entrepreneurial account of AI, driven by the desire for freedom both from state regulation and from individuals’ full and sustained ownership and control of their personal data. These three accounts compete and combine at different levels of strategic planning and policy-making in the UN, affecting how UNAIDS positions itself and how, in its advice to governments, it pitches to stakeholders the idea of using AI to end the HIV/AIDS epidemic as a public health threat.

For example, a 2018 “Artificial Intelligence for Health” workshop organized by the International Telecommunication Union (ITU) and the World Health Organization (WHO) was framed around an ethical narrative driven by the hope that AI would be made safe for the greater human good, and help international organizations, governments and civil society to achieve the Sustainable Development Goals (SDGs) and a better life for all. Yet some participants were more aligned with an entrepreneurial freedom narrative, which often prioritizes profiting from users’ data over user needs and protections, putting users at greater risk. Because UNAIDS wishes to protect people seeking HIV prevention, care and treatment services, UNAIDS needs to be aware of how some AI narratives might compromise its objective. This may require UNAIDS to expand its understanding of what it means to protect humans to include protecting data about humans.

How these three forces combine around specific UNAIDS policies, new technologies, and political narratives is different in every case. By keeping these three forces in mind, UNAIDS staff will be better equipped to assess the benefits and risks of emerging technologies. This will better empower them to uphold the UNAIDS mission in ways that preserve the dignity, security, and human rights of people living with HIV.

How Should Other UN Agencies Respond to AI and Big Data?

Analysis of how the three forces combine around specific missions, technologies, and political narratives is vital for any UN agency. In this context, we offer three additional recommendations:

  1. The UN commitment to a human-centered and rights-based approach should guide UN policy into the 21st century. To do so, UN agencies must be aware of how AI and big data can undermine privacy and informed consent as well as cause unfair, biased and discriminatory outcomes through opaque processes of AI-driven identification, profiling and automated decision-making.

  2. All UN agencies should debate and discuss these issues, both internally and externally, to push for new policies and regulatory measures that are guided by the overall UN mission and by the agency’s specific mission. UN agencies need to establish their own policies which ensure that all decision-making within their agency remains centred on human rights and civil liberties in this new era.

  3. In a UN context of hope that often emphasizes the benefits of “AI for Good” to achieve the SDGs, UN agencies should acknowledge and address the risks that AI and big data pose to their missions, risks that follow from the often-overlooked or de-emphasized fear and freedom narratives and may endanger the human rights and civil liberties of the key populations each UN agency serves. These risks cannot be addressed by technological standardization alone.

Where Does This Leave UN Agencies?

AI and big data promise to revolutionize healthcare around the globe. That revolution might mean harnessing ‘AI for Good’ and helping the UN to achieve its SDGs. But each specific application of AI and big data carries its own specific risks as well. UN agencies need to consider the tradeoffs between: the promised benefits and potential risks of each specific new technology they seek to use or recommend; that technology’s role in the policy objectives the agency hopes to achieve; and what the agency can and should do to limit potential violations of human rights and civil liberties if it were to employ or recommend a specific AI and big data technology. Crucially, UN agencies need to realize that risks posed by AI and big data to the key populations each agency serves could become risks to their missions and to the UN mission more broadly. Attention to how the three forces combine, align, contradict and potentially undermine themselves and one another will help UN agencies achieve these aims.


This article was written by Jolene Yiqiao Kong, Richard Burzynski and Cynthia Weber as a contribution to AI & Global Governance. The opinions expressed in this article are those of the authors and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners. This piece is a personal opinion of the authors and does not represent the official views or position of UNAIDS.

Jolene Yiqiao Kong is a former intern at UNAIDS and an MA student at the Geneva Graduate Institute, Richard Burzynski is a Senior Advisor at UNAIDS, and Cynthia Weber is a Professor of International Relations at Sussex University. This piece was published in collaboration with The Global from the Global Governance Centre at the Graduate Institute Geneva.


Suggested citation: Jolene Yiqiao Kong, Richard Burzynski and Cynthia Weber, "AI & Global Governance Platform: How Should UN Agencies Respond to AI and Big Data?," UNU-CPR (blog), 2019-09-04, https://unu.edu/cpr/blog-post/ai-global-governance-platform-how-should-un-agencies-respond-ai-and-big-data.
