Artificial intelligence (AI) is making governance smarter. It is also making it harder to question. This is the quiet paradox at the heart of the AI revolution. Herbert Simon, the Nobel laureate who gave us the concept of bounded rationality, argued that human decision-making is fundamentally limited, constrained by imperfect information and finite cognitive capacity. AI breaks those bounds. It processes data at scales no institution could match, models outcomes no committee could foresee, and generates decisions faster than any court could review. Rationality, in this new world, is no longer fixed; it expands with computational power, a condition that can be called flexibly bounded rationality.
But something is lost in that leap. The reasoning embedded in deep learning models, reinforcement learning systems and multi-agent algorithms cannot be meaningfully translated into human terms. We get the answer. We rarely get the reasoning. This is what might be called rational opacity: the condition in which the most consequential decisions in public life are produced by processes that no one, not regulators, not judges, not the engineers who built them, can fully explain.
A new kind of power
The implications for governance run deeper than most policy debates acknowledge. Democratic accountability depends on decisions being explainable and contestable. The rule of law depends on reason-giving, on authority being exercised in ways that can be scrutinized and challenged. Rational opacity erodes both. When an AI system denies someone a loan, a welfare benefit or parole, and no one can explain why, that person has no meaningful avenue for appeal. Not because the system failed, but because it worked exactly as designed, in ways that resist articulation.
What makes this especially troubling is that flexibly bounded rationality, the expanded decision-making capacity AI enables, is not evenly distributed. Those who design and control these systems operate at a cognitive frontier that most people cannot access. This is not simply a matter of information asymmetry. It is an asymmetry in the capacity to understand, challenge and override decisions that shape millions of lives. Epistemic inequality, in other words, is becoming a new axis of power, and it is concentrating rapidly.
Not a new problem, but a deeper one
To be fair, opacity has always been a feature of governance, not a bug introduced by AI. Human institutions have long been shaped by bias, heuristic reasoning, and structural complexity that resists full transparency. AI does not create opacity; it replaces one form with another.
But the substitution matters. A flawed human judgment can be interrogated, appealed, and, with enough persistence, overturned. A biased algorithm embedded in a system that processes millions of decisions per day operates at a scale and speed that render conventional contestation largely illusory. The opacity is deeper, the decisions faster, and the mechanisms for redress far less developed than the systems they are meant to check.
Governing the process, not just the outcome
The standard regulatory instinct is to focus on outcomes: Was the decision accurate? Was it fair? In a world of rational opacity, that is no longer sufficient. We need to ask how decisions are made, and embed oversight into the process itself, not merely evaluate its results.
This means shifting from outcome-based to process-based governance. It means technical audits that regulators are actually equipped to conduct, meaningful explanations for people affected by algorithmic decisions, and clear legal answers to a question most jurisdictions have yet to resolve: when an autonomous system gets it wrong, who is responsible?
There is also a structural argument for what might be called intelligence symmetry. If AI is powerful enough to make consequential decisions, it is powerful enough to scrutinize them. The same computational capacity that concentrates epistemic authority can, with deliberate policy, be used to distribute it.
Expanding access to analytical tools is not a peripheral concern; it is a precondition for any serious check on algorithmic power.
Legitimacy cannot be optimized
We are not simply choosing between efficient and inefficient governance. We are deciding what kind of authority we find legitimate. Rational opacity is not an accident; it is a structural feature of systems designed to push the limits of optimization. The very qualities that make AI valuable in governance are the ones that make it difficult to govern. That tension will not resolve itself.
The challenge ahead is therefore not merely regulatory. In a deeper sense, it is about governing rationality itself, ensuring that as these systems grow more capable, they remain accountable, interpretable and anchored in values that cannot be reduced to a loss function, the mathematical measure of error that an AI system is trained to minimize.
A decision can be statistically optimal and still be unjust if no one can explain it, contest it, or answer for it. Legitimacy requires more than good outcomes. It requires that power remains answerable to the people it affects. Before we delegate more of public life to systems we cannot read, that is the standard we should insist on.
Suggested citation: Tshilidzi Marwala. "Rational Opacity: Governing Intelligence Beyond Understanding," United Nations University, UNU Centre, 2026-05-08, https://unu.edu/article/rational-opacity-governing-intelligence-beyond-understanding.