
The role of AI Agents in humanitarian action: balancing ethics and innovation

The humanitarian sector must proactively examine how AI agents could transform – or harm – efforts to assist vulnerable populations.

In an era of constrained resources and growing humanitarian needs, AI agents present both unprecedented opportunities and profound risks for how the international community responds to crises. A new research initiative at UNU-CPR adopts a "design thinking" approach to examine one emerging application: agent-generated digital personas that can simulate conversations with refugees and conflict actors, potentially revolutionizing how humanitarian organizations collect data, train staff and understand community needs.

The research is not proposing these technologies as ready solutions. Rather, it initiates a critical conversation about what responsible adoption would require – and the guardrails needed to prevent potential harms to the very populations humanitarians seek to serve.

Testing AI personas in crisis contexts

The study created two AI agent-generated personas: "Ask Amina", simulating a refugee living in Chad's Metche camp, and "Ask Abdalla", representing a Rapid Support Forces combatant in eastern Sudan. Both personas combined digital avatars with AI to create an "anthropologist agent" system designed to autonomously curate an appropriately tailored knowledge base that, in turn, allows the avatars to respond authentically to questions about their experiences, needs and perspectives.

The results revealed intriguing possibilities. When tested against real survey data, Amina correctly answered 80 per cent of questions spanning nutrition, refugee assistance and conflict topics – questions not included in her training data. Simulated negotiations with Abdalla demonstrated tactical responses consistent with known behavioural patterns of armed group leaders, potentially offering valuable training opportunities for diplomatic personnel.

These findings suggest AI personas could address several persistent challenges in humanitarian work. Traditional data collection methods are often time-intensive, resource-heavy and sometimes impossible in high-risk environments. Language barriers and interpreter bias can also distort information gathering, while respondents may provide misleading answers when they perceive aid workers as gatekeepers to essential resources.

Voices from the field: feedback raises critical concerns

However, at a recent workshop introducing the agents, feedback from humanitarian practitioners, conflict mediators and development experts revealed profound concerns that must shape any future development of these technologies. Participants questioned fundamental assumptions about representation, authenticity and power dynamics inherent in creating AI versions of vulnerable populations.


"Why would we want to present refugees as AI creations when there are millions of refugees who can tell their stories as real human beings?" asked one participant, capturing a sentiment that echoed throughout the discussion. Others worried about the risk of "reinforcing biases if interactions with refugees are done with AI agents rather than actual humans", and whether these systems might inadvertently "sanitize or downplay real human suffering".

The feedback highlighted a crucial paradox: while AI personas might make information more accessible, they could simultaneously distance decision-makers from the lived realities of crisis-affected populations. Participants noted that refugees "are very capable of speaking for themselves in real life", raising questions about whether technological solutions might further marginalize voices that humanitarian action should actually amplify.

The urgency of proactive ethics

This research comes at a critical juncture. AI development is accelerating rapidly, often outpacing ethical frameworks and regulatory oversight. If the humanitarian sector waits to engage with these technologies until they are fully developed by commercial actors, it may lose the opportunity to shape them according to humanitarian principles and human rights standards.


The imperative for early engagement became clear during workshop discussions about potential applications beyond needs assessment. For example, participants identified promising uses for bias testing among humanitarian workers, noting that AI personas could help staff examine their own assumptions before engaging with real communities. Yet this potential comes with corresponding risks; any deployment must centre meaningful community engagement, ensuring that affected populations have genuine authority over how they are represented.

Resource pressures and ethical imperatives

The appeal of AI personas is undeniable in a resource-constrained environment. The United Nations faces persistent funding shortfalls while humanitarian needs continue expanding. Technologies that promise rapid, cost-effective data collection and training solutions naturally attract attention from organizations pressed to do more with less.

Humanitarian aid operations, such as WFP food distributions, face mounting pressure to meet the needs of crisis-affected communities, prompting urgent calls to explore ethical AI tools that support, but do not replace, direct human engagement. UN Photo/Logan Abassi

However, the workshop feedback made clear that cost considerations cannot override ethical obligations, warning against rushing to adopt AI tools without first establishing robust governance frameworks. The research itself acknowledges this tension, proposing that "whatever investment is made in these tools, equal investments are needed in regulation and governance". Otherwise, AI personas risk creating an illusion of community engagement while actually substituting artificial representations for authentic participation.

Essential guardrails for responsible development

The research proposes several critical safeguards that must guide any future development of AI agent-generated personas in humanitarian contexts. These include transparency requirements ensuring users understand how AI systems generate responses, and clear documentation of data sources and analytical processes.


More fundamentally, the study recommends establishing formal mechanisms connecting AI systems with representatives of the communities they portray. This goes beyond consultation to meaningful governance authority, giving affected populations rights to approve, modify or reject how they are represented in digital form.

Building bridges, not barriers

Perhaps most importantly, the research clarifies that AI agent-generated personas should complement rather than replace human engagement. The goal is not to create substitutes for real refugees or conflict actors, but to develop tools that might enhance understanding and prepare humanitarian workers for more effective direct engagement.

This distinction matters profoundly for maintaining the human dignity and agency that must remain central to humanitarian action. AI agent-generated personas might help organizations rapidly assess needs during disease outbreaks when traditional surveys are impossible, or enable negotiators to practice approaches before high-stakes mediations. But they cannot replace the fundamental requirement for humanitarian action to be grounded in authentic relationships with affected communities.

Policy implications and next steps

The research reveals an urgent need for proactive policy development. Key priorities include:

  • Investment strategies that balance innovation with ethics, requiring equal resources for governance mechanisms alongside technological development.
  • Community engagement protocols that give affected populations meaningful authority over digital representations and ensure technological solutions serve community-identified needs.
  • Capacity-building initiatives that prepare humanitarian organizations to deploy AI agent-generated personas responsibly while maintaining focus on direct human engagement.

The conversation initiated by this research is just beginning, but it arrives at a crucial moment. As AI capabilities advance and resource pressures intensify, the humanitarian sector faces choices that will shape how it serves vulnerable populations for years to come. By engaging proactively with both the promise and perils of AI agent-generated personas, policymakers and practitioners can work to ensure that technological innovation strengthens rather than undermines the fundamental humanitarian commitment to human dignity and protection.


The full working paper "Does the United Nations need agents? Testing the role of AI agent generated personas in humanitarian action" is available here.

The AI personas discussed in this research can be experienced at askamina.ai.

Some of the same themes in this blog are reflected in a recent book by Eduardo Albrecht, Political Automation.

Suggested citation: Eduardo Albrecht, "The role of AI Agents in humanitarian action: balancing ethics and innovation," UNU-CPR (blog), 6 June 2025, https://unu.edu/cpr/blog-post/role-ai-agents-humanitarian-action-balancing-ethics-and-innovation.
