No one should feel comfortable about the current state of the world. The greatest issues of our time — conflict, climate change and inequality — had staggering escalations in 2024, reaching veritable boiling points around the globe.
To overcome these intensifying challenges, the tendency is to search for swift solutions using the latest technologies. In 2024, we saw artificial intelligence vault to the top of these ambitions.
Although this technology is not yet the panacea some people believe it is, advances have reached a feverish pace. This has pushed AI into the vernacular as the great new hope for everything from climate change mitigation and food security to peacebuilding and education.
But if we are to shape this technology to become a meaningful and effective catalyst for human progress and sustainable development, we must ensure that our governance of AI is balanced and imbued with shared values.
Last September, world leaders met at the United Nations in New York, where they adopted the Pact for the Future. This framework reaffirms a commitment to global progress grounded in our common humanity and is built on the pillars of sustainable development, peace and security, and human rights. In a year plagued by escalating and drawn-out conflict, the Pact is a vivid reminder that we should remain steadfast in our focus on the well-being of current and future generations.
One of the Pact’s key annexes is the Global Digital Compact — a set of principles, objectives and actions focused on advancing an open, secure and human-centred digital future for all. Crucially, the Compact is rooted in universal human rights and its immediate attention is directed at achieving the Sustainable Development Goals.
Too often, the hard work necessary to find human-centred solutions is supplanted by a desire for fast thinking and technological fixes. For 75 years, however, the UN has worked to balance the demand to react rapidly in the face of crises with measured responses that achieve durable solutions. This ensures that member states remain aligned with the UN Charter’s determination “to reaffirm faith in fundamental human rights, in the dignity and worth of the human person, in the equal rights of men and women and of nations large and small.”
The Global Digital Compact is also the first universal agreement on the international governance of AI. It aligns nations to realize the technology’s potential and manage its risks, including military applications, through enhanced cross-border cooperation, stakeholder engagement and the targeted promotion of an inclusive, responsible and sustainable digital future.
The pursuit of balance in AI governance will, however, remain a challenge. As we look ahead to 2025 and begin implementing the Compact’s commitments, there are three key areas to focus on.
First, we must ensure that governance accommodates different rates of decision-making. The behavioural psychologist Daniel Kahneman, winner of the Nobel prize in economics and author of the well-known book “Thinking, Fast and Slow”, explored this balance through his concepts of system 1 (fast and instinctive) and system 2 (slow and methodical) thinking.
For example, in health care, AI holds great potential to help assess medical imaging through fast analysis in emergency situations and time-sensitive surgical applications. But health care also requires slower, subjective judgments grounded in the relationship between doctor and patient, and in how patients experience illness and respond to treatment.
For AI to be useful in both scenarios, its development should be receptive to human insights and facilitate both system 1 and system 2 thinking.
Second, AI governance must strive to do more than merely “satisfice” (a blend of “satisfy” and “suffice”) immediate needs. Satisficing would see stakeholders settle for governance mechanisms that are merely “good enough” for the challenge at hand. The problem is that AI and its implications are immensely complex and iterative, involving a nuanced collection of stakeholders and sectors.
We need a far more aspirational strategy than just meeting a minimum standard. We must approach AI governance with the same future-focused rigour and elements we rely on for multilateralism — trust, consultation, collective decision-making, compromise and solidarity.
As UN Secretary-General António Guterres reminds us, “multilateral cooperation is the beating heart of the United Nations”. This notion should inspire us to do more than satisfice for short-term goals. Together, we need to put in the hard work to build AI governance focused on our shared present and a better shared future.
Third, AI governance must balance global and local contexts. A significant challenge for regulation is that it impacts many sectors, countries, regions and populations. While stakeholders might agree in principle to the vital importance of human rights, access to technology and equitable economic development, the reality is that AI evolves in an environment of fierce market competition, proprietary product technologies and legal frameworks whose pace of adaptation is slower than that of technological change.
As a result, it is extremely difficult to achieve one-size-fits-all AI governance. We can use global regulation to foster international collaboration and help mitigate AI-related cross-border data risks such as privacy breaches and economic fragmentation. And local governance will be essential in helping us align national interests and cultural values.
The Global Digital Compact serves as a reference and a reminder of “the need for science, technology and innovation to be adapted and made relevant to local needs and circumstances”. It also outlines a commitment to “govern artificial intelligence in the public interest and ensure that the application of artificial intelligence fosters diverse cultures and languages and supports locally generated data for the benefit of countries and communities’ development”.
In the words of former UN Secretary-General Kofi Annan, “Let us invest in and embrace technology; it makes progress possible. But let us not forget that technology by itself cannot absolve us of our political responsibility to ensure that we use it wisely and efficiently for the good of society everywhere.”
Ultimately, the fulcrum of AI governance — that is, its balancing point — must be UN values, centred on the Universal Declaration of Human Rights. The Global Digital Compact sets us squarely on that path through its commitment to human rights and international law in the digital space, so that users can benefit from technological advances while being protected from abuse.
This article was first published by The Japan Times.
Suggested citation: Marwala, Tshilidzi. "To Serve Humankind, AI Must Be Shaped By UN Values," United Nations University, UNU Centre, 2025-01-07, https://unu.edu/article/serve-humankind-ai-must-be-shaped-un-values.