From an AI Revolution to Resolution: Four Gaps and Four Promises of Capacity Development in AI Governance

AI governance needs global capacity building to close gaps in access, literacy, science, and policy, and to ensure safe and fair use of AI for all.

One year after the launch of the Global Digital Compact, we find ourselves at a checkpoint in AI governance. Last week, the United Nations Security Council held a high-level debate on AI and international peace and security, and the United Nations General Assembly convened a high-level multistakeholder informal meeting to launch the Global Dialogue on Artificial Intelligence Governance. Secretary-General António Guterres declared, “For the first time, every country will have a seat at the table of AI”. To ensure meaningful participation and level the playing field, capacity has been identified as one of three central pillars of cooperation in AI, making people and global talent integral to the future of AI governance. During the debate, the Secretary-General emphasised closing the AI capacity gap. As technology progresses, new digital divides emerge, including divides of capacity, knowledge, and skills. While the promises and perils of AI dominate these deliberations, building capacity among policymakers can mitigate some of these challenges and help harness the promises of AI, in what I frame as four gaps and four promises.

The four gaps  

First, I argue that the capacity gap in AI governance is not one gap, but four. One of the most visible divides is in conduits and infrastructure, including access to data centres and training data, an extension of the already pressing digital divide in ICTs. Closing this gap could improve representativeness and inclusion in global AI, reduce market and political concentration, and bring more actors into AI innovation, but not unless the other three gaps are closed as well.

Among these is a gap in AI literacy, which is fundamentally a form of digital literacy with far-reaching consequences. As AI enters the lives of citizens who interact with the technology in diverse ways, including as users, consumers and producers of AI and AI-generated content, closing the AI literacy gap should be a principal goal, so that individuals can effectively and safely engage with new and emerging technologies.

A third gap is the scientific gap. For everyone to sit at the table, we must address this gap, which has deepened the concentration of AI in the hands of a few. Closing the gap in technical talent and knowledge generation can spur innovation and entrepreneurship, improve the use of data, produce research, and support education for the next generation of innovators. Technical resources are found not only in universities and research centres, but also in the public and private sectors. Supporting incubators and encouraging research and cooperation can help harness the power of AI.

Finally, and most crucially, AI capacity among policymakers is needed to understand AI, coordinate technology transfers and technical change, recognise and address its impacts across a spectrum of social, economic, and environmental concerns, and mitigate risks. Closing the AI capacity gap in governance can support a deeper understanding of the other three gaps and is central to ensuring what I refer to as the four promises of building capacity in AI governance. For instance, knowledgeable policymakers must address gaps in AI literacy to ensure that the benefits of AI accrue to all.

The four promises  

One of the central premises of my argument is that capacity development in AI can foster good governance, promote equitable participation in global dialogue and beyond, manage broader societal risks, and encourage innovation in AI.

I reflect on Prof. Yejin Choi’s opening remarks in the debate, with her call to prioritise fellowships and exchanges that connect researchers across borders and promote cutting-edge skills and long-term partnerships. For such technology transfers to occur, ones that spur innovation and generate opportunity for national and regional actors, policymakers need to foster collaboration for R&D and innovation, thereby strengthening firms' absorptive capacity and dynamic capabilities. To reach those goals, we need to know how innovation works.

The governance of emerging technologies poses another challenge. Governance of digital technologies has historically been marked by fragmentation. Polycentric governance of digital technologies can foster greater multistakeholder participation; at the same time, a technology that carries such acute human and environmental risks requires a shared vision. One of the main recommendations of the Governing AI for Humanity report is an AI capacity development network, which includes building the capacities of public officials. Even if every member has a voice in this dialogue, representative and meaningful participation is an outcome of capacity, at all levels of government.

Prof. Yoshua Bengio opened the debate with an emphasis on AI and its risks for humanity. Discussions over the two days addressed AI’s potential to erode trust through disinformation, threaten peace through autonomous weapons, and undermine security through targeted campaigns. While the alarm bells ring, so does the call to increase global research efforts towards safe and trustworthy AI and, at the same time, for governments to oversee a rapidly changing and volatile technology landscape. During the debate, Nataša Pirc Musar, President of Slovenia, said that decision-making must be guided by international law and humanitarian law alike. Whether AI is embedded in human rights frameworks and is inclusive, fair, safe, sustainable and trustworthy is a matter of how governments act and of their capacity to respond to these threats.

There is another aspect of capacity development that has not received enough attention: to manage the impact of AI, stakeholders need to understand the impact of AI. According to Nobel laureate Daron Acemoglu, one likely consequence of the current trajectory of AI is its potential to sustain and deepen pre-existing inequities and disempower workers. While we have spoken about ‘Skills in AI’, there is a whole other discussion on ‘AI and Skills’ and how it is reshaping the global economic landscape, changing the economy and society in unprecedented ways. Studying and recognising the impact of new and emerging technologies requires skilled professionals with the capacity to evaluate these impacts, and policy that responds in an appropriate and timely manner.

To learn more about this topic, apply to UNU-MERIT's online ITU Academy course 'Public Policy Evaluation for Digital Transformation' before 20 October 2025. Designed especially for practitioners, government officials and policymakers from developing countries, the course features Praachi Kumar as one of the instructors.

Suggested citation: Kumar, Praachi, "From an AI Revolution to Resolution: Four Gaps and Four Promises of Capacity Development in AI Governance," UNU-MERIT (blog), 7 October 2025, https://unu.edu/merit/blog-post/ai-revolution-resolution-four-gaps-and-four-promises-capacity-development-ai.