
AI & Global Governance: When Autonomous Weapons Meet Diplomacy

New technologies could dramatically change how war is waged. We need to take AI seriously in diplomatic settings.

Published: 21 Aug 2019
Author: Eugenio V. Garcia

By all accounts, the long-term impact of artificial intelligence (AI) on international relations may be unprecedented. A fierce great-power competition is underway for digital and tech supremacy. The rush to master this technology has the potential to dramatically change the way war is waged in the twenty-first century. As strong short-term economic incentives drive technological advances in civilian applications, ongoing military projects seek to pave the way for a decisive strategic advantage over rivals. Before the situation becomes unmanageable, we need to take AI governance seriously in diplomatic settings.

Whither autonomous weapons?

Mechanical systems are fallible, prone to technical problems that often require human intervention, and the humans who intervene can of course make mistakes of their own. Malfunctions, failures, and errors in software coding can have disastrous consequences in combat situations. Systems can be hacked, suffer cyber-attacks, or slip out of control. Autonomous weapons may fall into hostile hands and be appropriated by non-state actors, designated terrorist groups, or extremist groups persecuting minorities. From the perspective of international humanitarian law (IHL), machines can hardly grasp context with enough confidence to see the big picture in real-life conditions and decide for themselves about proportionality, distinction, or necessity, since such assessments usually imply a broader political judgement.

It is true that military powers are planning to invest heavily in AI. Autonomous weapons promise gains in efficiency, economy, and tactical superiority. They could operate in inaccessible or inhospitable areas and perform missions too dangerous for human combatants, reducing casualties in high-risk operations. These systems, it is argued, could deactivate bombs, clear minefields, carry out complex rescue missions, enter caves, and penetrate deep into enemy territory. Costs tend to be lower. Mini-drones are a case in point: deployed in their hundreds and loaded with explosives, swarms may prove extremely difficult to stop.

Left unregulated, fully autonomous weapons will become increasingly sophisticated. Equipped with advanced AI, smart weapons that are supposed to follow their programming strictly may exacerbate uncertainty through their lack of flexibility when circumstances change. Their startling speed could provoke unwanted confrontations in case of false alarms, mistakes, or accidents. How would an operator recall them if, for instance, the commander wishes to abort an attack? Cancelling missions would likely be conditional on keeping a human in the loop. It will also be very hard to predict what a self-learning AI program will do on the ground if left to its own devices. There are multiple examples of deep-learning algorithms arriving at surprising conclusions that not even their creators could have anticipated.

A dangerous arms race could accelerate and trigger indiscriminate proliferation. The most threatening long-term risk would be the loss of human control over the use of force. Even if robots could be more ‘accurate’ and therefore save the lives of noncombatants, delegating the moral burden of war to machines would entail a serious renunciation. Ultimately, this dehumanizing process could lead to a banalization of warfare and increase the likelihood of resorting to force to settle disputes.

People are at the center of IHL. The existence of machines cannot call into question the integrity of this long-established body of law. States’ legal obligations, and the responsibility that flows from them, must not be transferred to inanimate objects. By the same token, human rights are of paramount importance, in particular the right to life. Robots are not moral agents, nor do they possess the compassion, empathy, or intuition inherent in human beings. An algorithm alone should not make life-and-death decisions.

AI can help people live a fulfilling, dignified, and healthy life. Our duty is to ensure that its benefits are widely shared, preventing abuses whenever possible, adopting regulations where needed, and setting the ethical boundaries that public conscience requires.

The GGE process in Geneva

States have been discussing these matters under the United Nations Convention on Certain Conventional Weapons (CCW) since 2014. A Group of Governmental Experts (GGE) on lethal autonomous weapons systems met for the first time in 2017. Decisions must be taken by consensus, and progress has been slow so far. Disagreements persist over definitions, methodology, and the expected scope of negotiations, among other ‘red lines’. There is no common view on the desirable scope of any prohibitions to be imposed on autonomous weapons. Advanced military powers remain circumspect about introducing severe restrictions on the use of these technologies.

Although a clear roadmap for negotiations is still missing, the GGE’s recommendations stress that options for addressing the challenges posed by emerging technologies in the area of lethal autonomous weapons systems should proceed without prejudging policy outcomes. States agree that, given the dual-use nature and rapid development of these technologies, progress in the AI industry and civilian applications should not be hindered. There is also convergence on the view that autonomous weapons must abide by the international law of armed conflict.

Most countries advocate the need for meaningful human control, especially in selecting and attacking targets. Should we leave AI unconstrained or search for alternatives, such as a moratorium or a preemptive ban? Some claim a moratorium would be preferable to an outright ban because it would give states a ‘pause’ in which to consider what the technology will look like in years to come. Many civil society organizations, though, are actively pushing for a ban, including the Future of Life Institute and the Campaign to Stop Killer Robots.

As illustrated by chemical weapons, used in armed conflicts in the past but now stigmatized and condemned by the international community, political positions are not immutable. Strategic interests can evolve and bring about shifts previously thought inconceivable. Several outcomes could arise from the GGE, which could arguably just continue its deliberations and postpone any binding decision. One flagged possibility is a political declaration, issued by all states or some of them, or a code of conduct compiling rules and principles applicable to autonomous weapons. A mandate could also be given for states to negotiate in earnest, so that the meetings in Geneva lead to a legally binding instrument ensuring meaningful human control over the critical functions of lethal autonomous weapons systems.

Expectations are high for the GGE to deliver concrete results in a timely manner. Will people ever be truly safe in a world where machines use lethal force with no human control or intervention? AI defense technology will continue to advance, faster than ever, even if seemingly far from the eyes of the public, particularly in the Global South. It is crucial for all states to engage in this conversation, discuss it thoroughly, and think ahead if we wish to have a say in the direction this technology is taking.

Eugenio V. Garcia is Senior Adviser and team leader on peace and security, humanitarian and legal affairs at the Office of the President of the United Nations General Assembly, New York, having been a career diplomat with the Ministry of Foreign Affairs of Brazil. He holds a PhD in History of International Relations from the University of Brasilia and is an active researcher on international security, new technologies, and AI governance.

The opinions expressed in this article are those of the author and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners.

Suggested citation: Eugenio V. Garcia, "AI & Global Governance: When Autonomous Weapons Meet Diplomacy," UNU-CPR (blog), 21 Aug 2019, https://unu.edu/cpr/blog-post/ai-global-governance-when-autonomous-weapons-meet-diplomacy.
