Can Deep Learning Predict War, and Should It?

The future of conflict prediction relies on combining technical ability, institutional governance and ethical responsibility.

For centuries, war has been seen as the failure of human foresight. Diplomats respond, militaries gear up and scholars analyze, but accurate advance prediction of conflict is rare. Now, that may be changing. Advances in deep neural networks are moving conflict prediction from guesswork toward more precise, data-driven forecasts. The question is no longer whether machines can predict conflict, but how this predictive ability should be used and regulated.

The roots of this transformation can be traced back to earlier uses of neural networks in conflict modeling. Even basic architectures like multi-layer perceptrons (MLPs) and radial basis function (RBF) neural networks have shown that non-linear, data-driven methods can surpass traditional statistical models. These systems effectively capture complex interactions among variables, such as economic conditions, alliances and geographic factors, that linear regression misses. Empirical evidence shows prediction accuracies exceeding 75%, with MLP models outperforming other methods due to their ability to model interconnected relationships among variables. That same interconnectedness, however, makes deep learning networks extremely difficult to interpret.
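The advantage of non-linear models can be made concrete with a minimal sketch. The dataset below is invented for illustration (it is not the data behind the studies cited): conflict risk follows an XOR-style interaction between two indicators, a pattern a small MLP learns easily but no linear model can separate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "interaction" data: risk is high when exactly one of two indicators
# (say, economic stress and political instability) is elevated -- an
# XOR-style pattern that no linear decision boundary can separate.
X = rng.uniform(-1, 1, size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer MLP trained with full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 1.0
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)               # hidden representation
    p = sigmoid(H @ W2 + b2).ravel()       # predicted risk in (0, 1)
    g = ((p - y) / len(y))[:, None]        # cross-entropy gradient w.r.t. logit
    gH = (g @ W2.T) * (1.0 - H ** 2)       # backprop through tanh (old W2)
    W2 -= lr * (H.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(0)
mlp_acc = ((p > 0.5) == (y > 0.5)).mean()

# Logistic regression (a linear model) trained on the same data.
w = np.zeros(2); b = 0.0
for _ in range(5000):
    q = sigmoid(X @ w + b)
    gq = (q - y) / len(y)
    w -= lr * (X.T @ gq); b -= lr * gq.sum()
lin_acc = ((q > 0.5) == (y > 0.5)).mean()

print(f"MLP accuracy: {mlp_acc:.2f}  linear accuracy: {lin_acc:.2f}")
```

On this toy task the MLP approaches perfect accuracy while the linear model stays near chance, because the label depends on the *combination* of indicators rather than on either one alone.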

The main change is in scale. Today’s deep learning systems can handle large, diverse datasets such as satellite images showing troop movements, financial flows revealing economic stress, climate indicators forecasting resource shortages and social media signals capturing public sentiment. These models do more than just analyze variables; they develop internal representations of conflict dynamics that are hidden, constantly changing and highly non-linear. This shift moves us from traditional, theory-based modeling to large-scale pattern recognition.

This shift has significant implications.

First, deep learning boosts predictive accuracy by capturing the complexity of the world rather than simplifying it. Since conflict usually results from multiple interacting forces (political, economic, environmental and social), neural networks, inspired by the human brain, are well-equipped to model these interactions. Their capacity to approximate complex functions allows them to identify relationships that are difficult to analyze manually. Essentially, they serve as “experts” trained on historical data, capable of detecting early warning signs that human analysts might overlook.

Second, deep learning, if appropriately formulated, can offer a probabilistic perspective on conflict. Instead of binary predictions of war or peace, outputs are expressed as probabilities, assessments of risk amid uncertainty. Methods such as Bayesian neural networks and evidence frameworks enable these probabilities to account for uncertainty in data and model parameters. This approach is vital in policy settings, where decisions rely on risk assessment rather than certainty.
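One simple way to obtain such probabilistic, uncertainty-aware outputs is a bootstrap ensemble, a deliberately lightweight stand-in here for the Bayesian neural networks mentioned above. In the sketch below (all data and effect sizes are invented for illustration), each ensemble member is trained on a resampled dataset, the mean of their outputs is the risk estimate, and their disagreement serves as an uncertainty measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two risk indicators drive conflict probability through a
# logistic relationship (purely illustrative, not real conflict data).
X = rng.normal(0.0, 1.0, (500, 2))
true_p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + 1.0 * X[:, 1])))
y = (rng.uniform(size=500) < true_p).astype(float)

def fit_logistic(X, y, steps=2000, lr=0.5):
    """Fit a logistic model with full-batch gradient descent."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = (p - y) / len(y)
        w -= lr * (X.T @ g); b -= lr * g.sum()
    return w, b

# Bootstrap ensemble: each member sees a resampled dataset, so members
# disagree more in regions where the data constrain the model less.
members = []
for _ in range(20):
    idx = rng.integers(0, len(y), size=len(y))
    members.append(fit_logistic(X[idx], y[idx]))

def predict_risk(x):
    """Return (mean probability, spread) across the ensemble."""
    probs = np.array([1.0 / (1.0 + np.exp(-(x @ w + b))) for w, b in members])
    return probs.mean(), probs.std()

high, high_sd = predict_risk(np.array([2.0, 2.0]))   # both indicators elevated
low, low_sd = predict_risk(np.array([-2.0, -2.0]))   # both indicators calm
print(f"high-risk case: {high:.2f} +/- {high_sd:.2f}")
print(f"low-risk case:  {low:.2f} +/- {low_sd:.2f}")
```

The output is a graded risk with an attached spread rather than a flat verdict of war or peace, which is the form a policy process can actually weigh.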

However, increasing predictive accuracy introduces a paradox. As predictions become more precise, their impact grows. An 80% accurate conflict prediction model can guide preventive diplomacy, while a 95% accurate model might influence military strategies, financial markets and geopolitical alliances. In this context, prediction equates to power.

This raises three fundamental challenges.

The first issue is causality versus correlation. While deep learning models excel at detecting patterns, they do not inherently explain why those patterns occur. For example, a model might forecast conflict based on rising commodity prices, falling gross domestic product and increased online polarization. However, without additional structure, it cannot distinguish cause from coincidence. Policymakers might then act on correlations that could change over time. Therefore, incorporating causal inference, counterfactual analysis and experimental design into machine learning processes is not optional but crucial.
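The correlation-versus-causation trap can be shown with a toy simulation (all variables and effect sizes are invented for illustration): a hypothetical confounder drives both an observed indicator and conflict risk, so the indicator predicts conflict without causing it, and only adjusting for the confounder reveals this.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Hypothetical confounder: regional instability drives BOTH commodity
# prices and conflict risk; prices have no causal effect on conflict.
instability = rng.normal(0.0, 1.0, n)
prices = 0.8 * instability + rng.normal(0.0, 1.0, n)
conflict = 1.2 * instability + rng.normal(0.0, 1.0, n)

# Naive regression of conflict on prices alone finds a strong "effect".
A = np.column_stack([prices, np.ones(n)])
naive = np.linalg.lstsq(A, conflict, rcond=None)[0][0]

# Adjusting for the confounder: the price coefficient collapses to ~0.
B = np.column_stack([prices, instability, np.ones(n)])
adjusted = np.linalg.lstsq(B, conflict, rcond=None)[0][0]

print(f"naive price coefficient:    {naive:.2f}")
print(f"adjusted price coefficient: {adjusted:.2f}")
```

A policymaker acting on the naive coefficient would target prices to no effect; this is why the causal structure, not just the pattern, has to enter the pipeline.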

The second key aspect is interpretability and trust. Neural networks are frequently labeled as “black boxes” and, in conflict prediction, this lack of transparency can be risky. Decisions regarding war and peace should not be made by systems that cannot clarify their reasoning. To keep humans in control, it is essential to incorporate advances in explainable AI, such as feature attribution, model decomposition and surrogate models, into conflict prediction frameworks.
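Among the feature-attribution methods mentioned, permutation importance is perhaps the simplest to illustrate. In this toy sketch (the data and model are illustrative assumptions), shuffling one input at a time and measuring the drop in accuracy reveals which inputs the trained model actually relies on.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: feature 0 is predictive, feature 1 is pure noise.
X = rng.normal(0.0, 1.0, (1000, 2))
p = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))
y = (rng.uniform(size=1000) < p).astype(float)

# Fit a logistic model (a stand-in for any trained predictor).
w = np.zeros(2); b = 0.0
for _ in range(2000):
    q = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = (q - y) / len(y)
    w -= 0.5 * (X.T @ g); b -= 0.5 * g.sum()

def accuracy(X):
    q = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return ((q > 0.5) == (y > 0.5)).mean()

base = accuracy(X)

# Permutation importance: shuffle one feature at a time and measure how
# much accuracy drops; a large drop marks an influential input.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base - accuracy(Xp))

print(f"baseline accuracy: {base:.2f}")
print(f"importances: {[f'{v:.2f}' for v in importance]}")
```

Shuffling the predictive feature costs the model most of its accuracy, while shuffling the noise feature costs almost nothing: a crude but inspectable account of what the model is attending to, of the kind analysts need before trusting a forecast.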

The third aspect is governance. Who owns the models? Who manages the data? Who determines how predictions are applied? Without proper governance, predictive systems risk being weaponized, used not to prevent conflict but to anticipate and exploit it. When some actors have superior predictive capabilities, it creates an intelligence asymmetry that could destabilize global security. Therefore, governing AI in conflict prediction is more than just a technical concern; it is a geopolitical issue.

There is also a fundamental ethical issue: if we can forecast conflict, do we bear a responsibility to intervene? What if acting based on a prediction changes the actual outcome? This illustrates the classic problem of reflexivity: forecasts can influence the very systems they aim to predict. A warning about possible conflict might prompt diplomatic efforts, or, if misunderstood, could escalate the situation. Thus, conflict prediction is inherently an active process; it is an intervention.

Despite these obstacles, the benefits could be significant. Deep learning-driven early-warning systems might facilitate proactive diplomacy at an unprecedented level. Resources could be used more effectively, humanitarian emergencies predicted earlier, and conflicts defused before they escalate. In a world dominated by complex, interconnected threats, such capabilities are essential rather than optional.

However, technology alone is insufficient. The future of conflict prediction relies on combining three components: technical ability, institutional governance and ethical responsibility. While deep learning can predict where conflicts might arise, it cannot determine the appropriate actions to take.

That remains, fundamentally, a human decision.

Suggested citation: Tshilidzi Marwala. "Can Deep Learning Predict War, and Should It?," United Nations University, UNU Centre, 2026-04-20, https://unu.edu/article/can-deep-learning-predict-war-and-should-it.