
The Politics of Speed: Governing AI in an Age of Algorithmic Conflict

Machines can act more quickly than we anticipate; our responsibility is to ensure we govern them, especially when they outpace our judgment.

Artificial intelligence (AI) is fundamentally transforming global conflict. Historically, research on militarized interstate disputes focused on identifying patterns: policymakers and scholars relied on statistical methods to examine long periods of peace and sudden outbreaks of war. Early machine learning techniques, such as support vector machines (SVMs) and neural networks, showed that conflict is seldom caused by a single, straightforward factor. Instead, it results from complex, nonlinear interactions among variables such as democratic strength, alliance networks, economic ties and power imbalances.

While these models greatly enhanced predictive accuracy, they also revealed a constant tension. In high-stakes geopolitics, knowing the outcome is often less important than understanding the reasons behind it. Currently, this tension is reaching a breaking point due to a new factor: velocity.

The compression of the decision cycle

Today, AI systems operate in rapidly cycling processes of training, deployment and execution. We have shifted from updating strategic models annually to retraining them continuously as new data flows in from satellites, social media and signals intelligence. Decisions that once required hours of human deliberation within the traditional observe, orient, decide, act (OODA) loop are now compressed into seconds, or even milliseconds, by adaptive algorithms.

This temporal collapse is inherently political. It turns prediction into preemption. As the interval between a model’s “forecast” and the system’s “action” approaches zero, the usual opportunities for diplomacy and de-escalation disappear. Consequently, the governance challenge extends beyond how accurate these systems are to how fast they are permitted to act.

The performance-interpretability trade-off

The historical comparison between SVMs and neural networks offers an important analogy for modern governance. In early conflict research, SVMs frequently produced more accurate predictions, but neural networks lent themselves better to sensitivity analysis, and therefore to explaining why a dispute was likely.


In a rapid conflict setting, there is a key trade-off. A highly precise but opaque (“black box”) system can respond automatically without giving human commanders the context needed to judge whether the response is justified. In warfare, where lives are at stake, a quick but unclear system is far more dangerous than a slightly slower, interpretable one. Speed also intensifies the nonlinear nature of international relations: minor shifts in alliances can prompt immediate, algorithm-driven counteractions, increasing the danger that “flash wars” (unexpected escalations caused by interacting algorithms) become a systemic risk.

The infrastructure of power: cloud, edge and embedded

Where the “thinking” occurs shapes how power is exercised: the governance of algorithmic conflict is directly tied to AI’s physical architecture.

First is cloud computing, the centralized foundation of strategic AI, enabling extensive data integration and detailed auditing. In a cloud-based system, governance is maintained through centralized oversight and multiple layers of verification. However, reliance on the cloud introduces latency; in a world of fast-moving threats such as hypersonic missiles and cyberattacks, a three-second server handshake delay can be critical. Dependence on external providers also raises concerns about digital sovereignty.

Second is edge computing. By deploying AI directly onto drones, sensors or naval vessels, countries gain real-time decision-making. This rapid response can be tactically vital, providing “sensor-to-shooter” speeds that avoid centralized delays. Nonetheless, edge systems are challenging to monitor: oversight becomes fragmented across distributed autonomous units, potentially leading to unintended escalation.


Third is embedded AI. This represents the most advanced level, with intelligence built directly into hardware such as cruise missiles or submarines, making the system fully autonomous. Embedded AI performs optimally in “contested environments” where communication is disrupted, but once deployed, these systems are nearly impossible to govern. Errors or biases cannot be corrected mid-mission. Speed is maximized, but flexibility and accountability are essentially absent.

These architectures establish a layered ecosystem in which conflicts can be triggered at the edge, coordinated in the cloud and executed by embedded systems, all faster than a human diplomat can pick up a phone. That speed is precisely what makes the ecosystem dangerous.

The feedback loop of algorithmic war

The greatest danger of high-speed AI lies in the generation of cascading feedback loops. Unlike traditional conflict models, where variables tend to be stable, in algorithmic conflict, the model itself becomes a variable.

If an AI estimates a 70% likelihood of an adversary’s aggression, it may initiate a defensive stance. The adversary’s AI detects this stance, interprets it as a sign of an imminent attack and escalates in turn, thereby “confirming” the first system’s prediction. This process, called “algorithmic anticipation,” creates a scenario in which responses are based not on actual enemy actions but on predicted intentions. Because these systems update rapidly, even a small data error can spread through the network, reinforcing a false conflict narrative that may become a self-fulfilling prophecy.
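This dynamic can be sketched as a toy simulation (every function name, threshold and update rule below is an illustrative assumption, not a model of any real system): two automated systems each raise their threat estimate when the other adopts a visible defensive posture, so a single noisy data point that nudges one system over its posture threshold drags both estimates upward.

```python
# Toy sketch of "algorithmic anticipation": two systems whose threat
# estimates feed off each other's observed posture. Purely illustrative.

def posture(threat_estimate, threshold=0.5):
    """A system adopts a visible defensive posture above a threshold."""
    return threat_estimate > threshold

def update(threat_estimate, rival_postured, gain=0.2, decay=0.05):
    """Raise the estimate when the rival looks defensive; relax otherwise."""
    if rival_postured:
        return threat_estimate + gain * (1.0 - threat_estimate)
    return threat_estimate - decay * threat_estimate

# A single noisy data point pushes system A just over its threshold.
a, b = 0.55, 0.30
history = []
for step in range(10):
    a_post, b_post = posture(a), posture(b)   # each observes the other
    a, b = update(a, b_post), update(b, a_post)
    history.append((round(a, 2), round(b, 2)))

print(history)
```

In this run both estimates climb toward certainty even though no real aggression ever occurred; if the initial nudge is removed (both systems start below the threshold), neither ever postures and both estimates simply decay.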

Towards an ethic of deliberate friction

Governing this new landscape requires a paradigm shift. In addition to regulating what AI does, we must govern how fast it does it, adding structured validation steps in vital areas such as diplomacy, strategic planning and nuclear command. First, retraining guardrails limit how quickly models can update their understanding of the world without human oversight. Second, architectural transparency mandates that embedded and edge systems keep auditable “black box” records. Third, a human buffer applies “human-in-the-loop” or “human-on-the-loop” protocols in key geopolitical scenarios to prevent automated escalation.
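A minimal sketch of the first measure, assuming a simple policy wrapper (the class name, cool-down interval and drift threshold are illustrative, not an existing API): an automatic model update is approved only if enough time has passed since the last one and the proposed change is small; everything else is routed to a human review queue.

```python
# Hypothetical "retraining guardrail": rate-limits automatic model updates
# and escalates large or too-frequent changes to human review.
import time

class RetrainingGuardrail:
    def __init__(self, min_interval_s=3600.0, max_drift=0.1):
        self.min_interval_s = min_interval_s   # cool-down between updates
        self.max_drift = max_drift             # largest auto-approved change
        self.last_update = None
        self.review_queue = []

    def request_update(self, drift, now=None):
        """Return True if the update may proceed without human sign-off."""
        now = time.monotonic() if now is None else now
        too_soon = (self.last_update is not None
                    and now - self.last_update < self.min_interval_s)
        if too_soon or drift > self.max_drift:
            self.review_queue.append((now, drift))  # escalate to a human
            return False
        self.last_update = now
        return True

guard = RetrainingGuardrail()
print(guard.request_update(drift=0.05, now=0.0))     # small change: allowed
print(guard.request_update(drift=0.05, now=10.0))    # too soon: queued
print(guard.request_update(drift=0.5, now=7200.0))   # large shift: queued
```

The design choice is deliberate friction: the guardrail never blocks updates outright, it only forces the fastest and largest ones through the slower human channel.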

The velocity gap

We are seeing a widening gap between the rapid pace of advanced technological systems and the deliberate, slower pace of human institutions. Diplomacy, law and democratic oversight are designed to proceed slowly to weigh consequences, build legitimacy and ensure deliberation. By contrast, AI is built for speed and efficiency. If this disparity continues to widen, control over peace and war may shift from elected officials to autonomous systems. The crucial challenge of the 21st century is to prevent our technological progress from turning us into passive spectators of our own potential destruction. Machines can act more quickly than we anticipate; our responsibility is to ensure we govern them, especially when they outpace our judgment. While speed can be a powerful tool, true wisdom in conflict requires taking the necessary time and thereby safeguarding peace.

Suggested citation: Tshilidzi Marwala. "The Politics of Speed: Governing AI in an Age of Algorithmic Conflict," United Nations University, UNU Centre, 2026-04-27, https://unu.edu/article/politics-speed-governing-ai-age-algorithmic-conflict.
