
We Need Good Governance to Shape AI for Good

To successfully govern AI for the benefit of all, we need our approach to be as dynamic, innovative and creative as the pursuit of AI itself.

In the early years of artificial intelligence research, AI technologies were mostly regulated by the limitations of computing power. Data processing was painfully slow by today’s standards, and a relatively small group of computer scientists was creating and using nascent AI applications.

Decades later, we have entered a new era where governments and every major technology company are focused on leveraging AI to transform our daily lives and livelihoods. Powerful artificial intelligence tools are now at our fingertips.

But even among well-intentioned initiatives to use AI for data-driven decision-making, increased economic productivity and health care automation, the risk of negative consequences is mounting. As technical barriers fall, we face a growing urgency to close governance gaps so that we can steer this technology towards equitable and sustainable development: in other words, shape AI for good.

This week, the International Telecommunication Union (the United Nations specialized agency for digital technology) will host the AI for Good Global Summit in Geneva. Held annually since 2017, the summit is the leading forum to explore how AI can help accelerate efforts to achieve the UN Sustainable Development Goals and support long-term action to overcome global development challenges.

“AI Governance Day” will kick off this year’s discussions. However, as we consider the global field of AI stakeholders, explore the many dimensions of AI research and applications, and grapple with how to bridge the AI divide between the Global South and Global North, we are confronted with daunting complexity.

AI Governance Day will kick off the AI for Good Summit 2024 in Geneva. Photo: ITU / Rowan Farrell / CC BY-NC-SA 2.0

Over the past year, the United Nations University (UNU) and the UN system have been especially committed to assessing that complexity to help guide governance.

The UNU Global AI Network was launched last month in Macau. With more than 40 founding members from the private sector, academia, governments and civil society organizations, the network will collaborate to produce evidence-based policy recommendations that focus on boosting the benefits of AI for the Global South.

This builds on UNU’s research contributions to the UN Secretary-General's High-level Advisory Body on AI, which is developing international governance recommendations. Following its interim report and a period of extensive consultations and public input, the advisory body will publish its final report in the middle of this year.

The report will provide vital guidance when UN Member States, including Japan, gather in September at the Summit of the Future. There they will agree on the Global Digital Compact, namely “shared principles for an open, free and secure digital future for all.”

Also, in March, the UN General Assembly adopted a landmark resolution to “promote safe, secure and trustworthy artificial intelligence systems” for sustainable development. When presenting the draft resolution, United States Ambassador to the UN Linda Thomas-Greenfield emphasized that “we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us.”

A major challenge of governing AI as a global community, however, is that we need to balance competing national interests while shaping a shared vision for these technologies’ trajectory — technologies that are largely developed and controlled by a small group of countries, primarily the US and China.  


This is why we must work diligently to build a solid foundation for AI governance that addresses inequalities in technology access and differences in socioeconomic contexts. That foundation, a framework for good governance of AI, should emphasize three points.

First, we must root AI governance in sound, shared values such as transparency, truth and privacy. Transparency fosters trust and accountability among stakeholders and allows technologies to be evaluated fairly for ethical and legal compliance.

This is especially important in sectors such as transportation and health care, where public safety depends on predictable and reliable AI behavior.  

Truth is essential to create accurate and unbiased AI systems, to prevent misinformation in science, education and media, and to strengthen the social trust needed to ensure the adoption of AI for good. Privacy is paramount to safeguarding the vast amounts of personal data that novel systems will increasingly collect.

Importantly, the collective benefits of artificial intelligence should never supersede the protection of individual rights. As such, if we align our AI governance values with the Universal Declaration of Human Rights and the UN Charter, we are on a technological path that supports equality and human dignity, while connecting AI to the cooperative pursuit of peace, security and justice.  

Second, these values should underpin a hierarchy for action. This hierarchy does not entail a top-down approach; rather, it comprises layered actions that drive the positive, upward development of AI for good and of the laws that govern it.

For example, if we want to promote accurate public health education, then we also need mechanisms (or actions) to incentivize the development of AI technologies that counter the spread of misinformation. If our goal is to create global privacy standards for the data used to train AI algorithms, then we need international institutional structures to set and review those standards.  


Third, as we work to identify the most critical areas for artificial intelligence governance, we should prioritize the main links in the chain of technological development and use — data, algorithms, computing and applications.

Data governance allows for oversight of data collection, storage, analysis and cross-border flows while protecting sensitive data and preventing its misuse. Algorithmic governance is essential to ensure AI judgments are free of discrimination and bias, and that they accurately and safely inform applications in sectors such as banking, law enforcement and health care.

From cloud computing and essential infrastructure to data centers’ high levels of water consumption and the impact on labor markets, there is vast potential for AI to disrupt livelihoods, economies and ecosystems. By governing computing and applications, we can address the overlapping technical, social, economic and political implications of AI that, if overlooked or ignored, could make or break our efforts to advance sustainable development.

Finally, as we work to shape AI for good, we must remember that artificial intelligence will always be a dynamic frontier of innovation with the potential for negative impacts and unforeseen setbacks.  

“Artificial intelligence is not a substitute for human intelligence; it is a tool to amplify human creativity and ingenuity,” as computer scientist Fei-Fei Li has highlighted.  

For good governance, we need our approach to be as dynamic, innovative and creative as the pursuit of AI itself.  

This article was first published by The Japan Times. Read the original article on The Japan Times website.  

Suggested citation: Marwala, Tshilidzi. "We Need Good Governance to Shape AI for Good," United Nations University, UNU Centre, 2024-05-29, https://unu.edu/article/we-need-good-governance-shape-ai-good.
