
AI and International Relations — a Whole New Minefield to Navigate

Guiding the evolution of AI to achieve real benefits for humanity should begin by understanding AI within the context of international relations.

At the heart of international relations is how we connect with one another — as humans, as nations, as organizations and beyond. Dwight Eisenhower once said, “The world must learn to work together, or finally, it will not work at all.”  These words continue to hold significant weight.

Our current context shows distressing discord. We find ourselves amid unprecedented turmoil, with conflict convulsing regions across the world.

Indeed, the post-pandemic setting is one defined by war and destruction. Humanity stands on the precipice of an uncertain future, and a harrowing panorama of devastation and despair has replaced an already fragile peace.

At the same time, our systems and structures are called to change. It is no secret that we need to catch up on the Sustainable Development Goals (SDGs), while artificial intelligence (AI) is advancing at a pace that is difficult to keep up with.

Yet even in an era of war and destruction there are some certainties. We are also entering the age of AI, and we must respond to it. The future depends on what we do today, and we have it within our power to change course.

This represents the root of international relations (IR) — a field that AI will increasingly define. We must begin by embracing diverse thought and challenging our perspectives.

Eight years ago, United Nations Member States adopted the SDGs. This was a comprehensive reframing and reworking of the previous Millennium Development Goals (MDGs). Sustainable development approaches are long-term plans that seek to meet the requirements of the present without compromising resources and possibilities for future generations.

We are now at the midpoint on the road to the envisioned just, equitable and more sustainable future, and progress has been far slower than needed. In fact, across many metrics, we have stalled completely. Although this is partly a legacy of the COVID-19 pandemic, I am mindful that we were off track even before it.

The 2023 SDG Report warns that the promises enshrined in the SDGs are in peril. Although the slowdown in progress towards the SDGs affects all nations, it disproportionately impacts poorer countries because of their limited representation on the global stage. Moreover, the growing economic disparity between developed and developing countries, coupled with the unequal effects of the climate crisis, is a significant worry.

UN Secretary-General António Guterres recently sounded a warning. “Unless we act now, the 2030 Agenda will become an epitaph for a world that might have been.” The world is now witnessing a rise in extreme poverty, a trend not seen in a generation, with projections indicating that 575 million people could still be living in extreme poverty by 2030. We are running out of time, and there is a renewed urgency to act — AI could be our great definer.

Defining strengths and pitfalls

What are the benefits of AI? How do we govern this constantly evolving and emerging field? How do we centre humanity in the shift? We must begin by understanding AI within the context of international relations.

First, how do we ensure that AI development and adoption emphasize transparency? Here, we must consider how to provide meaningful insight into AI decision-making processes. AI is about making machines that think, act and interact; in many ways, it replicates human intelligence in computers.

The challenge of transparency in AI lies in the complexity of algorithms and the opacity of decision-making processes. Do we understand these systems? In the Handbook of Machine Learning, for instance, the complexities of these technologies are outlined in depth.

As the book demonstrates, there are three broad types of AI — prediction machines, clustering machines and generative machines.

Prediction machines forecast future outcomes based on historical data. Clustering machines group similar data points based on shared characteristics. Generative machines, including ChatGPT, create new content, such as images, text or even music, that resembles human-created work, using algorithms that learn patterns from existing data.
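To make these three categories concrete, here is a minimal sketch in Python, my own illustration rather than anything drawn from the Handbook of Machine Learning, that fits a prediction model, a clustering model and a simple generative model to toy data using scikit-learn.

```python
# A minimal sketch of the three broad families of AI described above,
# fitted to toy data with scikit-learn (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                                  # toy "historical" data
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# 1. Prediction machine: forecast an outcome from historical data.
predictor = LinearRegression().fit(X, y)
print("forecast:", predictor.predict([[1.0, -1.0]]))

# 2. Clustering machine: group similar data points together.
clusterer = KMeans(n_clusters=3, n_init=10).fit(X)
print("cluster labels:", clusterer.labels_[:10])

# 3. Generative machine: learn the data's patterns, then create new samples.
generator = GaussianMixture(n_components=3).fit(X)
new_points, _ = generator.sample(5)
print("generated samples:\n", new_points)
```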

Understanding how this technology works is an essential step in tackling this challenge. Transparent AI systems are essential for building user trust and ensuring accountability.

Second, it is imperative to consider cross-border data flows. Allowing data to move across borders fosters global collaboration and innovation, supporting education, economic development, health care and environmental sustainability.

Third, what are the applications of AI? The book Hamiltonian Monte Carlo Methods in Machine Learning presents advanced methods with applications in renewable energy, finance and image classification for the biomedical sector.

Fourth, what is the impact of AI on political outcomes? In 2019, a video of Nancy Pelosi, then Speaker of the US House of Representatives, was deliberately slowed by 25%, with its pitch altered, to make it appear as though she was slurring her words. That video was a crude edit, but it hints at what deepfakes make possible. Deepfakes use deep learning technology, a branch of machine learning built on multi-layered neural networks, to create convincing fakes: having learned what a face looks like from different angles, the model can transpose that face onto a target as if it were a mask.

Doctored videos like this are available online, and their use in politics presents a disturbing reality. How do we distinguish the fake from the genuine? Could deepfakes be used for more nefarious purposes? Consider the implications of a deepfake video of a head of state announcing that a country would be launching a nuclear attack on another nation.

As these generative AI systems continue to advance, we must also consider the ethics challenge. The technology can be misused to create deepfake content, manipulate information, or infringe upon privacy and consent. Addressing these ethical dilemmas requires stringent guidelines.

Fifth, data challenges in the context of AI encompass issues of collection, quality, privacy and bias. Addressing these challenges is essential for developing ethical AI systems. The book Deep Learning and Missing Data in Engineering Systems explores advanced computational techniques that generate synthetic data to deal with missing values and with quality, privacy and bias problems.

Data is the bedrock of AI, and we must take the challenges surrounding it very seriously. As we consider quality data that is inclusive, we must have a thorough understanding of synthetic data. Embracing synthetic data in tandem with real data allows for the development of robust and ethically sound AI systems, fostering innovation while respecting privacy concerns.
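As a rough illustration of this idea, and not of the book's specific methods, the sketch below imputes missing entries in a small toy dataset and then draws synthetic records from a simple generative model fitted to the completed data; all variables and figures are placeholders.

```python
# A hedged sketch: completing missing values, then augmenting real data
# with synthetic records (illustrative placeholders, not the book's method).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
real = rng.normal(loc=[0.0, 5.0], scale=[1.0, 2.0], size=(100, 2))
real[rng.random(real.shape) < 0.1] = np.nan   # simulate 10% missing entries

# Step 1: estimate the missing entries from the observed ones.
completed = IterativeImputer(random_state=0).fit_transform(real)

# Step 2: fit a simple generative model and draw synthetic records that
# mimic the statistics of the real data without copying any single row.
model = GaussianMixture(n_components=2, random_state=0).fit(completed)
synthetic, _ = model.sample(200)

# Downstream models can then train on the combined real and synthetic pool.
augmented = np.vstack([completed, synthetic])
print(augmented.shape)  # (300, 2)
```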

Sixth, it is vital to consider the impact of AI on economics. AI technologies are poised to disrupt the job market. These technologies enhance efficiency, optimize supply chains, and facilitate automation, increasing productivity and competitiveness for nations engaging in international trade. The integration of AI in industries such as logistics, manufacturing, and services reshapes traditional trade patterns.

However, we must also be mindful of the challenges that may emerge, including the need for standardized regulations, addressing ethical concerns and ensuring fair access to AI's benefits. The intersection of AI and cross-border trade is becoming a critical factor in shaping the future of the global economy.

Seventh, there is the challenge of the truth. AI is also learning when to deceive. In 2007, Evan Hurwitz and I built software agents that could bluff while playing poker. While this is a light-hearted example, there are certainly some sinister connotations: when exposed to large datasets, machine learning algorithms can absorb the biases and deceptive patterns in that data.

As AI evolves, understanding and mitigating the risks associated with deceptive behaviours become paramount. This calls for vigilant oversight and responsible development to harness these powerful tools for the greater good without compromising the fundamental values of truth and integrity.

Eighth, there is the challenge of wars and the changing face of conflict. AI can be harnessed for military applications, enhancing strategic decision-making, surveillance, and even autonomous weaponry. This raises profound ethical and security concerns.

For example, AI-driven drones, cyberattacks orchestrated by intelligent algorithms, and the potential for autonomous weapons to make life-and-death decisions all blur the lines of accountability and raise questions about the humane conduct of war. Moreover, the prospect of AI falling into the wrong hands or being manipulated for malicious purposes adds a layer of complexity to global security.

Robust international agreements and ethical guidelines are required, along with continuous vigilance, to ensure that AI technologies are developed and deployed responsibly and in a manner that safeguards human lives.

In a 2011 book titled Militarized Conflict Modeling Using Computational Intelligence, we researched how AI could help us understand wars. We modelled the relationship between key variables and the risk of conflict between two countries, and established a framework for how these models can be used to pursue peace. This is an example of how AI can mitigate conflict.
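The sketch below conveys the general flavour of such models rather than the book's actual data or methodology: a simple classifier relating hypothetical dyadic features of a pair of countries, such as alliance ties, contiguity and democracy scores, to a synthetic conflict label.

```python
# A hedged sketch of conflict-risk modelling on made-up dyadic data.
# Feature names, coefficients and labels are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),          # allied (0/1)
    rng.integers(0, 2, n),          # geographically contiguous (0/1)
    rng.random(n),                  # relative capability ratio (0-1)
    rng.random(n),                  # joint democracy score (0-1)
])
# Toy assumption: conflict is more likely for contiguous, non-allied,
# less democratic dyads.
logit = -1.0 - 1.5 * X[:, 0] + 1.2 * X[:, 1] - 2.0 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("risk for a hypothetical dyad:",
      model.predict_proba([[0, 1, 0.5, 0.2]])[0, 1])
```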

Related to this is the ninth challenge, which is about human rights. AI has been applied to predict and avoid interstate conflict, combat climate change, spur a technology revolution, facilitate trade, safeguard human rights and improve the international financial architecture.

Moreover, training large AI models can be energy-intensive, contributing to environmental concerns. Finding ways to make AI development and usage more environmentally sustainable is essential.

As this challenge indicates, even as we readily embrace AI as a solution and a fundamental tenet of international relations, we must do so ethically and with humanity in mind.

As much of our research indicates, AI systems and human oversight will have to work in tandem to ensure the responsible and ethical development and deployment of AI technologies. Many AI systems are trained on biased data, leading to discriminatory outcomes. Addressing bias and ensuring fairness in AI algorithms is an ongoing challenge.

We must ensure that these systems are inclusive and based on equitable structures and systems. Admittedly, finding the right balance between innovation and regulation to ensure public safety and the ethical use of AI is a continuous struggle. In this regard, it is necessary to develop governance structures that can adapt to the evolving landscape of AI technology.

In conclusion, as Henry Kissinger said, “The greatest need of the contemporary international system is an agreed concept of order.” AI may very well be the key to this agreement. 

This article was first published by Daily Maverick. Read the original article on the Daily Maverick website.

Suggested citation: Marwala, Tshilidzi. "AI and International Relations — a Whole New Minefield to Navigate," United Nations University, UNU Centre, 2023-11-23, https://unu.edu/article/ai-and-international-relations-whole-new-minefield-navigate.
