Artificial intelligence and deliberation: an argument with machines

How is AI shaping the deliberative process, and what can be done to ensure this technology supports rather than undermines human judgement?

The international community is urgently exploring how global governance can more effectively address escalating conflicts and crises, exacerbated by a rapidly evolving digital landscape. September saw the adoption of the Pact for the Future; one of its two annexes, the Global Digital Compact (GDC), introduces a comprehensive global framework to enhance digital cooperation and AI governance: a global commitment to improving the governance of increasingly interconnected risks, including the development of ethical and human-centred artificial intelligence (AI) and data privacy regulations.

In the lead-up to the Summit of the Future, global stakeholder deliberations explored how to unlock the potential of AI for all and reform decision-making in global governance frameworks. However, it is important that we examine how AI is shaping the deliberative process itself. This technology can both enhance and complicate deliberation, and thoughtful governance is needed to ensure that it serves as a supportive tool for human judgement rather than a disruptive force.

AI as a tool for deliberation

AI is already being used as a deliberative tool in military and political contexts, raising questions about our trust in human judgement, where accountability lies and the potential for adversaries to manipulate discourse. Along with these questions, a major concern is AI’s capacity to generate disinformation.

AI-generated disinformation is reshaping conflict dynamics, particularly in conflict-prone areas, as analysed in a UNU-CPR policy brief. By generating realistic but false content at scale, AI makes it easier to spread misleading narratives quickly. Additionally, algorithms that prioritize engagement over accuracy can inadvertently promote disinformation, leading to its wider circulation and acceptance. Disinformation is one of the many ways that AI is influencing deliberation, raising concerns among policymakers, politicians, developers of AI technologies and civil society.
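To illustrate the mechanism, the minimal Python sketch below shows how ranking purely by predicted engagement can push a sensational false post above a sober correction, and how blending in an accuracy signal changes the ordering. The posts, scores and field names are invented for illustration; this does not reflect any real platform’s algorithm.

```python
# Minimal sketch (not any platform's actual algorithm) of how ranking
# purely by predicted engagement can surface disinformation: sensational
# false posts often score higher on engagement than sober corrections.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g. expected clicks/shares (hypothetical)
    accuracy_score: float        # e.g. a fact-check signal in [0, 1] (hypothetical)

posts = [
    Post("Shocking footage of an attack (mislabeled location)", 0.92, 0.10),
    Post("Verified report: no attack occurred at that location", 0.35, 0.95),
]

# Engagement-only ranking: the false but sensational post wins.
by_engagement = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

# One possible mitigation: down-weight low-accuracy content.
by_blended = sorted(
    posts, key=lambda p: p.predicted_engagement * p.accuracy_score, reverse=True
)

print(by_engagement[0].text)  # the misleading post ranks first
print(by_blended[0].text)     # the verified report ranks first
```

The point of the toy mitigation is not that a single multiplication solves the problem, but that any ranking objective which ignores accuracy will, by construction, amplify whatever content is most engaging, true or not.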

AI as a tool for manipulation

AI’s potential as a manipulative tool to derail deliberation is diverse and increasingly complex. Disinformation has long spread rapidly on social media, primarily through AI-driven recommendation systems and ranking algorithms that enable false and harmful content to circulate more quickly than other types of information. More recently, however, advances in the accuracy and availability of large language models (LLMs) and other generative AI tools have introduced even more concerning ways for disinformation to spread.

Four key risks identified in the brief include:

  • Inventing false violence, such as intentionally mislabelling the location of a video portraying a violent incident to fuel conflict
  • Governments spreading false information for political purposes
  • False claims about peacekeeping and humanitarian interventions
  • The use of generative AI tools, like OpenAI’s ChatGPT, to engineer disinformation

One particularly concerning use of AI in manipulation is the generation of deepfakes: synthetic media that uses AI to convincingly alter or generate audio and visual content, making it appear as if someone said or did something they did not. Deepfakes pose significant risks, including the spread of misinformation, reputational damage and potential misuse in blackmail or harassment. Entirely fabricated yet hyper-realistic, this content undermines trust in media and the authenticity of information, particularly in politics and social discourse.

AI as a tool for strategic decision-making in the military

Leveraging AI to analyse and synthesize copious amounts of data, recognize patterns, simulate the outcomes of different choices for scenario analysis or make deliberation more inclusive through translation and real-time transcription is a complex and controversial undertaking.
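To make the scenario-analysis idea concrete, here is a minimal, hypothetical Python sketch comparing two invented courses of action with a Monte Carlo simulation. The options, probabilities and payoffs are illustrative assumptions, not data from any real system.

```python
# Hypothetical sketch of AI-style scenario analysis: Monte Carlo simulation
# of two courses of action under uncertainty. All numbers are invented;
# real systems would use far richer models and data.
import random

def simulate(success_prob: float, benefit: float, cost: float,
             trials: int = 100_000) -> float:
    """Estimate the expected net outcome of one course of action."""
    total = 0.0
    for _ in range(trials):
        total += benefit if random.random() < success_prob else -cost
    return total / trials

options = {
    "negotiate": simulate(success_prob=0.70, benefit=10, cost=2),
    "intervene": simulate(success_prob=0.45, benefit=25, cost=20),
}

# The simulation informs, but does not make, the decision: a human must
# still weigh the estimates, knowing the model's inputs may be biased.
for name, expected in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected net outcome of about {expected:.2f}")
```

Even in this toy example, the simulation only estimates expected outcomes; weighing them, and questioning the assumptions behind them, remains a human judgement.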

AI can analyse vast amounts of intelligence data to assist military leaders in making strategic decisions, offering both benefits and challenges. While it can enhance efficiency and improve response times, potentially saving lives and reducing collateral damage, it also raises concerns about lowering the threshold for using force, the lack of accountability in automated decisions and potential biases in AI systems, including when the data collected and analysed is biased and unrepresentative.

Human judgement based on these results is therefore at risk of being underinformed and unreliable, raising several critical questions:

  • When AI is used as a deliberative tool, but the strategic decisions are made by humans, who is accountable for the actions?
  • In an adverse situation, does the human get to blame the technology for providing capricious or erroneous data, or does the human bear the blame because the decision was ultimately theirs?
  • If a human is to blame, are the developers who created the technology, the organizations that deployed it or the individuals who relied on its recommendations accountable?

Secretary-General António Guterres (centre right) and Deputy Secretary-General Amina Mohammed (centre left) attend a virtual meeting of the High-level Advisory Body on Artificial Intelligence. UN Photo/Eskinder Debebe

At the United Nations, AI governance is being prioritized through initiatives like the GDC, which proposes the establishment of a multidisciplinary Independent International Scientific Panel on AI. Additionally, the Governing AI for Humanity report of the UN Secretary-General’s High-level Advisory Body on AI outlines seven recommendations for governing AI and highlights current gaps, such as the lack of representation and coordination.

However, gaps remain, especially concerning accountability in AI-assisted deliberations. Effective governance of AI as a deliberative tool would require accountability mechanisms or assessment tools, as well as regulatory frameworks for adopting agentic AI: AI that can perform tasks and make decisions on its own, and that can replace human agency in deliberation.

AI decision audit trails, for example, would require AI systems to generate transparent, traceable logs of their decision-making processes, which independent auditors can regularly review. Another mechanism is AI impact assessments: similar to environmental or social impact assessments, these would require organizations to thoroughly evaluate potential risks, biases and societal impacts before deploying agentic AI systems. Other gaps in the governance of deliberative AI include the lack of structured interdisciplinary collaboration, public knowledge and transparency.
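As a rough illustration of what an audit trail could look like in practice, the Python sketch below logs each AI recommendation alongside the human decision in an append-only, hash-chained record that auditors can later verify for tampering. The record fields and the hash-chain design are assumptions for illustration, not a standardized format.

```python
# Minimal sketch of an AI decision audit trail: every recommendation is
# logged as an append-only, hash-chained record that independent auditors
# can later verify. Field names and the chain design are illustrative.
import hashlib
import json
import time

class AuditTrail:
    def __init__(self):
        self.records = []

    def log_decision(self, model_id, inputs, recommendation, human_decision):
        prev_hash = self.records[-1]["hash"] if self.records else "GENESIS"
        record = {
            "timestamp": time.time(),
            "model_id": model_id,
            "inputs": inputs,                  # data the system saw
            "recommendation": recommendation,  # what the AI suggested
            "human_decision": human_decision,  # what the human decided
            "prev_hash": prev_hash,            # link to the previous record
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute the hash chain; returns False if any record was altered."""
        prev = "GENESIS"
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev or recomputed != r["hash"]:
                return False
            prev = r["hash"]
        return True

trail = AuditTrail()
trail.log_decision("threat-model-v2", {"sensor": "radar-7"}, "escalate", "deferred")
assert trail.verify()  # auditors can re-run this check at any time
```

Chaining each record to the previous one means that altering or deleting any entry after the fact breaks verification, which is what gives the log its value to independent auditors.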

Ultimately, the future of deliberation in an AI-enhanced world will depend on our ability to balance the benefits of these technologies with the need for ethical standards and accountability mechanisms. Key questions about trust, accountability and the risks of disinformation highlight the complexities we face.

As we move forward, the implementation of the GDC by the international community, and other similar efforts, will be key to ensuring that AI serves to enhance deliberation rather than undermine trust in information. These initiatives will help secure a future where AI acts as a supportive tool for human judgement, rather than a replacement for it, ensuring that deliberative processes remain robust, inclusive and equitable.

Suggested citation: Polekar, Nidhi, "Artificial intelligence and deliberation: an argument with machines," UNU-CPR (blog), 19 November 2024, https://unu.edu/cpr/blog-post/artificial-intelligence-and-deliberation-argument-machines.