Governing the Invisible: How Algorithms Are Quietly Driving Conflict

We are entering a time when conflict extends beyond battlefields to be embedded in data, models and optimization functions.

In the 20th century, we understood conflict through visible forces: armies, alliances and ideology. In the 21st century, conflict is increasingly shaped by something far less visible: algorithms.

Today, artificial intelligence (AI) systems do more than predict conflict; they are starting to change the fundamental causal structures that make conflict more or less likely. This shift demands a new approach to governance, one based not only on outcomes but also on causal understanding: knowing what truly causes conflict, not just what is linked to it.

At the heart of this transformation lies a key distinction between correlation and causation. Traditional statistical models find patterns, such as economic decline correlating with unrest or proximity leading to escalation. However, correlation alone does not tell us whether changing a variable will affect outcomes. That is where causal inference becomes crucial. Methods like structural causal models and Bayesian networks enable us to estimate what would happen if conditions were different: the domain of counterfactuals.

For example, does increasing economic interdependence between two states lessen the likelihood of conflict, or is interdependence merely a result of already peaceful relations? Without exploring such counterfactual questions, governance addresses symptoms rather than root causes.
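To make this concrete, here is a minimal sketch in Python; the model, its variables and its coefficients are invented for illustration. An unobserved history of peaceful relations raises interdependence and lowers conflict risk, so the naive correlation overstates what an intervention on interdependence would actually achieve:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy structural causal model (all coefficients are illustrative assumptions):
# a peaceful history is an unobserved confounder that raises interdependence
# AND lowers conflict risk, so raw correlation overstates interdependence's effect.
peaceful_history = rng.normal(size=n)
interdependence = 0.8 * peaceful_history + rng.normal(size=n)
conflict_risk = -0.3 * interdependence - 0.6 * peaceful_history + rng.normal(size=n)

# Naive (correlational) estimate: regress conflict risk on interdependence alone.
naive_slope = np.polyfit(interdependence, conflict_risk, 1)[0]

# Causal estimate via the intervention do(interdependence = x): sever the arrow
# from peaceful_history into interdependence and resample the outcome.
def do_interdependence(x, n=10_000):
    u = rng.normal(size=n)  # the confounder still varies on its own
    return (-0.3 * x - 0.6 * u + rng.normal(size=n)).mean()

causal_effect = do_interdependence(1.0) - do_interdependence(0.0)
print(f"naive slope:   {naive_slope:+.2f}")    # ~ -0.59, inflated by confounding
print(f"causal effect: {causal_effect:+.2f}")  # ~ -0.30, the true structural term
```

The gap between the two numbers is exactly the gap between governing symptoms and governing causes.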

Machine learning methods like automatic relevance determination (ARD) improve this analysis by highlighting which variables carry the most predictive power, such as economic interdependence, democratic institutions, power asymmetry and alliances. However, predictive importance is not the same as causal importance. A variable might be highly predictive but not actionable, or, worse still, misleading if interventions are applied incorrectly. This is why statistical relationship modeling must be paired with causal reasoning.
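As a hedged illustration of what such a ranking looks like in practice, the sketch below runs scikit-learn's ARDRegression on synthetic data; the feature names and effect sizes are assumptions, and the ranking it yields is predictive, not causal:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)
n = 500

# Synthetic dyad-level features (names and effect sizes are illustrative);
# only the first two actually drive the outcome, the rest are noise.
features = ["interdependence", "power_asymmetry", "alliances", "proximity"]
X = rng.normal(size=(n, len(features)))
y = -0.5 * X[:, 0] + 0.7 * X[:, 1] + 0.1 * rng.normal(size=n)

# ARD places a separate precision prior on each coefficient and prunes
# features whose weights the data cannot support.
model = ARDRegression().fit(X, y)

for name, coef in sorted(zip(features, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:>16}: {coef:+.3f}")
# Expected: power_asymmetry and interdependence dominate while the noise
# features shrink toward zero. High relevance here is still predictive,
# not causal, importance.
```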

Conflict is rarely caused by a single factor. It results from complex interactions, such as low interdependence combined with power imbalances, or weak institutions interacting with geographic proximity. These interaction effects can be examined using designed experiments and quasi-experimental methods such as natural experiments, instrumental variables or difference-in-differences techniques. These approaches let us move beyond mere observation and establish evidence of causal influence.
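As one sketch of this quasi-experimental logic, the difference-in-differences example below uses a simulated two-period panel; the "treatment" (say, a new trade agreement), its effect size and the data are all assumed for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_dyads, true_effect = 200, -0.4  # illustrative: treatment lowers unrest

# Toy two-period panel: half the dyads receive a policy "treatment"
# between period 0 and period 1.
df = pd.DataFrame({
    "dyad": np.repeat(np.arange(n_dyads), 2),
    "post": np.tile([0, 1], n_dyads),
    "treated": np.repeat(rng.integers(0, 2, n_dyads), 2),
})
dyad_effect = rng.normal(size=n_dyads)[df["dyad"]]  # fixed dyad heterogeneity
df["unrest"] = (dyad_effect + 0.2 * df["post"]
                + true_effect * df["treated"] * df["post"]
                + 0.3 * rng.normal(size=len(df)))

# Difference-in-differences: the treated:post interaction isolates the
# causal effect under the parallel-trends assumption.
fit = smf.ols("unrest ~ treated * post", data=df).fit()
print(fit.params["treated:post"])  # ~ -0.4, recovering the assumed effect
```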

The rise of automated conflict drivers

As AI systems are used in diplomacy, intelligence, and finance, they start to influence the very variables they are meant to analyze. In doing so, they create what we might call endogenous risk loops, where prediction feeds back into reality.

A trading algorithm that destabilizes currency markets, a recommendation system that intensifies polarizing narratives, or a defense model that escalates perceived threats are not just predictive tools. They are interventions in the system itself, often without clear acknowledgment.

These are examples of automated conflict drivers: algorithmic processes that, through optimization, unintentionally raise the likelihood of instability.
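A toy simulation makes the loop visible. Every dynamic in it is an assumption, but it shows how a forecast that actors respond to can become self-amplifying:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal sketch of an endogenous risk loop (all dynamics are assumptions):
# an algorithm forecasts tension, actors respond to the forecast, and the
# response feeds back into next period's tension.
def simulate(feedback, steps=50):
    tension = 0.1
    for _ in range(steps):
        forecast = tension + 0.05 * rng.normal()  # noisy prediction
        # Actors hedge against the forecast (arms purchases, divestment),
        # which raises real tension by `feedback` per unit of forecast.
        tension = 0.9 * tension + feedback * max(forecast, 0.0)
    return tension

print(f"no feedback:     {simulate(0.00):.2f}")  # tension decays toward zero
print(f"strong feedback: {simulate(0.15):.2f}")  # loop becomes self-amplifying
```

With no feedback, the tension decays; with even modest feedback, the forecast helps manufacture the instability it predicts.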

Consider economic interdependence. It has long been viewed as a stabilizing force, and causal analysis often supports its role in reducing conflict. Yet algorithmic systems that optimize supply chains or trade flows may inadvertently weaken interdependence. Without causal awareness, what seems efficient locally might increase systemic fragility globally.

Similarly, power asymmetry, another key driver, can be worsened by AI systems that disproportionately boost the capabilities of dominant actors. In causal terms, AI acts as a treatment that shifts the balance of power, potentially raising the likelihood of conflict under certain structural conditions.

From prediction to responsibility

The main problem is that AI systems are often seen as neutral observers. They are not. They encode assumptions, inherit historical biases, and optimize for goals that might not align with peace.

This creates a governance blind spot: we oversee decisions, but not the causal pathways that lead to them.

To address this, machine-learning interpretability must be built into governance, so that we understand how models prioritize different variables and how changes in inputs influence outcomes. However, interpretability must go further; it must be linked to causal accountability. It is not enough to know what a model focuses on; we must also understand what happens when those variables are altered.

ARD offers a starting point by ranking variable importance. However, governance must go further by validating causality: testing whether intervening on high-relevance variables actually reduces conflict risk. This means incorporating experimental thinking into policy, designing interventions, testing them and iterating based on evidence.
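Such a validation step might look like the sketch below, in which a variable flagged as highly relevant (here, hypothetically, power asymmetry) is probed with a simulated intervention in a toy structural model; all coefficients and thresholds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical validation step: ARD flagged power asymmetry as highly
# relevant; test whether intervening on it actually shifts conflict risk
# in a toy structural model (coefficients and threshold are assumptions).
def conflict_probability(asymmetry_shift=0.0, n=20_000):
    capability_gap = rng.normal(size=n) + asymmetry_shift  # do(gap + shift)
    institutions = rng.normal(size=n)
    risk = 0.5 * capability_gap - 0.4 * institutions + rng.normal(size=n)
    return (risk > 1.0).mean()  # probability of crossing a crisis threshold

baseline = conflict_probability(0.0)
intervened = conflict_probability(-0.5)  # policy that narrows the gap

print(f"baseline risk:  {baseline:.3f}")
print(f"after do(-0.5): {intervened:.3f}")  # lower only if the link is causal
```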

This marks a shift from reactive governance to design-based governance, where systems are assessed not only on predictive accuracy but also on their causal effect on peace and conflict conditions.

A new governance agenda

To regulate automated conflict drivers, four principles are vital. First is causal transparency. AI systems must reveal not only which variables they focus on but also whether those relationships are causal or merely correlational. This involves incorporating causal inference frameworks into model development and auditing.

Second is counterfactual accountability. Systems should be evaluated through simulated interventions: what happens if economic interdependence rises, if inequality drops, or if institutional strength improves? Governance must be based on these “what-if” analyses.
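In practice, such counterfactual audits can be as simple as scoring a risk model under a grid of simulated interventions; the levers, shift sizes and weights below are illustrative assumptions, not estimates from real data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Counterfactual accountability sketch: score a toy risk model under
# several "what-if" interventions, each expressed as a shift in one lever.
def risk(interdependence=0.0, inequality=0.0, institutions=0.0, n=50_000):
    noise = rng.normal(size=n)
    score = -0.3 * interdependence + 0.4 * inequality - 0.5 * institutions + noise
    return (score > 1.0).mean()

scenarios = {
    "status quo":           {},
    "interdependence +1sd": {"interdependence": 1.0},
    "inequality -1sd":      {"inequality": -1.0},
    "institutions +1sd":    {"institutions": 1.0},
}
for name, levers in scenarios.items():
    print(f"{name:>22}: P(conflict) ~= {risk(**levers):.3f}")
```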

Third is experimental governance. Policymaking should follow the logic of designed experiments: testing interventions in controlled or semi-controlled environments, learning from the results and expanding what works. This approach is especially important in complex, non-linear systems where intuition often leads astray.
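A minimal sketch of this pilot-then-scale logic, with invented unrest figures: compare treated and control districts, estimate the effect, and demand that it replicate before scaling up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Sketch of experimental governance (all numbers are illustrative): pilot a
# stabilization intervention in half of a set of comparable districts, then
# test whether the measured unrest index differs before scaling up.
control = rng.normal(loc=1.0, scale=0.5, size=40)  # unrest index, no pilot
treated = rng.normal(loc=0.8, scale=0.5, size=40)  # piloted districts

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"effect estimate: {treated.mean() - control.mean():+.2f}")
print(f"p-value:         {p_value:.3f}")
# Iterate: scale the intervention only if the effect replicates; in complex,
# non-linear systems a single significant pilot is weak evidence.
```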

Fourth is alignment with peace-promoting mechanisms. AI systems should be designed to strengthen factors that causally reduce conflict, such as economic ties, institutional cooperation and democratic resilience, rather than merely optimizing for efficiency or short-term gains.

The stakes

We are entering a time when conflict is no longer fought only on battlefields; it is embedded in data, models, and optimization functions. The question is no longer whether AI can predict war but whether we can control the causal structures that make war more or less likely.

If we fail, we risk embedding instability into the systems that shape our world. If we succeed, we unlock something unprecedented — the ability to design systems that not only understand conflict but also actively prevent it.

Suggested citation: Tshilidzi Marwala. "Governing the Invisible: How Algorithms Are Quietly Driving Conflict," United Nations University, UNU Centre, 2026-04-13, https://unu.edu/article/governing-invisible-how-algorithms-are-quietly-driving-conflict.