Why AI Governance Must Be Built on the Mathematics of Learning

If fairness in AI is probabilistic, then Africa’s role is to redefine probabilities by designing systems that learn from and serve its own people.

The International Conference on Theory and Practice of Electronic Governance (ICEGOV) is a global platform that unites leaders from government, academia, industry and international organizations to explore the role of digital innovation in strengthening governance. ICEGOV promotes dialogue on technology, policy and sustainable development. The 2025 event, held in Abuja, Nigeria, from 4 to 7 November, was co-chaired by me and Dr. Bosun Tijani, Minister of Communications, Innovation, and Digital Economy of Nigeria, and organized by the United Nations University and Nigeria’s National Information Technology Development Agency, under the Federal Ministry of Communications.

At this conference, I delivered a keynote speech on the mathematics of artificial intelligence (AI) governance, emphasizing that the future of digital policy must be based not only on ethics but also on the scientific facts that define AI’s capabilities and limits.

As AI becomes the invisible hand of modern governance, deciding who gets a loan, a job or parole, the calls for “fair”, “transparent” and “accountable” AI have never been louder. Yet while policymakers and ethicists speak in ideals, algorithms speak in probabilities. Between the high rhetoric of global AI principles and the cold mathematics of machine learning lies a dangerous gap — one that risks turning good intentions into harmful outcomes.
AI governance today often assumes that bias, error or opacity can be eliminated. However, the mathematics of learning reveals a more difficult truth: every algorithm operates under unavoidable trade-offs. The bias-variance trade-off, for example, shows that reducing one type of error tends to increase another. The Probably Approximately Correct (PAC) learning framework shows that a learned model comes with only probabilistic guarantees: it is “probably” correct, to within a stated margin of error, and only when trained on enough data. The No Free Lunch theorem proves that there is no universally superior AI algorithm; every model succeeds only within the context of its data and assumptions. Ignoring these limitations leads to policies that are not only impractical but also ethically inconsistent.
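
To make the first of these trade-offs concrete, the standard decomposition from statistical learning theory (stated here for squared-error prediction, with f the true function, f̂ the learned model and σ² the irreducible noise in the data) splits a model’s expected error into three parts:

\[
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{noise}}
\]

The noise term cannot be regulated away, and methods that shrink the bias term typically inflate the variance term; policy can influence how error is allocated, but it cannot legislate error out of existence.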

Real-world failures illustrate this point. The COMPAS recidivism algorithm used in US courts was criticized for racial bias: investigators found that it produced markedly higher false positive rates for Black defendants, even though its risk scores were roughly equally calibrated across groups. Mathematically, this is no accident: when base rates differ across groups, no model can satisfy calibration and equal error rates at the same time. These are not merely coding mistakes; they are expected outcomes of deploying complex models without understanding their theoretical limits.
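
A minimal numerical sketch (with illustrative figures, not COMPAS’s actual rates) makes the arithmetic of this impossibility visible: if two groups differ in base rates and a risk score gives both the same precision and the same miss rate, their false positive rates are forced apart.

    # Sketch of the impossibility arithmetic (Chouldechova, 2017).
    # Given prevalence p, precision PPV and miss rate FNR, the false
    # positive rate is fully determined:
    #   FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * (1 - FNR)

    def forced_fpr(p: float, ppv: float, fnr: float) -> float:
        """False positive rate implied by prevalence, precision and miss rate."""
        return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

    # Illustrative numbers: identical precision (0.8) and miss rate (0.2)
    # for both groups, but different base rates.
    for group, base_rate in [("Group A", 0.3), ("Group B", 0.5)]:
        fpr = forced_fpr(p=base_rate, ppv=0.8, fnr=0.2)
        print(f"{group}: base rate {base_rate:.0%} -> forced FPR {fpr:.1%}")

    # Group A is falsely flagged about 8.6% of the time, Group B 20.0%.
    # Equalizing the false positive rates would require unequal precision:
    # the fairness criteria mathematically conflict.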

What this means for policymakers is clear: effective AI governance cannot rely solely on ethics. It must be grounded in the science of algorithms. Regulation should shift from aspirational goals (“eliminate bias”) to risk-based realism. For example, requiring algorithmic impact assessments (AIAs) that document model complexity, data representativeness and fairness trade-offs can turn accountability from a slogan into a working practice. Similarly, tying model oversight to complexity measures ensures that more powerful and riskier models undergo stricter scrutiny.
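
To illustrate how such an assessment could be made concrete and auditable, the sketch below records the kinds of quantities an AIA might document; the field names and values are hypothetical, not any regulator’s actual schema.

    from dataclasses import dataclass, field

    # Hypothetical sketch of an algorithmic impact assessment (AIA) record.
    # All field names and values are illustrative assumptions.
    @dataclass
    class ImpactAssessment:
        system_name: str
        intended_use: str
        model_complexity: str                 # e.g. parameter count or capacity notes
        training_data_summary: str            # provenance and known coverage gaps
        groups_evaluated: list[str] = field(default_factory=list)
        fairness_metrics: dict[str, float] = field(default_factory=dict)
        documented_tradeoffs: str = ""        # which criteria were prioritized, and why

    aia = ImpactAssessment(
        system_name="loan-screening-v2",
        intended_use="pre-screening of consumer credit applications",
        model_complexity="gradient-boosted trees, ~40k parameters",
        training_data_summary="2019-2023 applications; rural regions under-represented",
        groups_evaluated=["region", "gender", "age band"],
        fairness_metrics={"fpr_gap": 0.06, "ppv_gap": 0.02},
        documented_tradeoffs="Calibration prioritized; FPR gap reported, not equalized.",
    )
    print(aia.system_name, aia.fairness_metrics)

Making the trade-offs an explicit field, rather than an afterthought, is what turns such an assessment into something a reviewer can actually check.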
This approach aligns with a broader shift in global governance, exemplified by frameworks like the EU AI Act and the NIST AI Risk Management Framework, which increasingly recognize uncertainty as inherent to AI systems. However, for these frameworks to be effective, they must incorporate theoretical diagnostics from computational learning theory, treating generalization, sample complexity, and trade-offs as governance variables rather than mere technical details.
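
One such diagnostic is the textbook PAC sample-complexity bound for a finite hypothesis class H in the realizable setting: to guarantee, with probability at least 1 − δ, that a consistent learner’s error is at most ε, the number of training examples m must satisfy

\[
m \geq \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)
\]

Read as a governance variable, the bound says that every demand for higher accuracy (smaller ε) or higher confidence (smaller δ) is implicitly a demand for more, and more representative, data.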

AI is not magic; it is mathematics, and mathematics has limits. A regulator who ignores these boundaries risks setting impossible standards that hinder innovation or, worse, allow harm to persist under the false guise of fairness. The future of AI governance must therefore start where the algorithms do: within the logic of learning. Only by anchoring policy in these algorithmic principles can we create systems that are not perfect, but accountable; not omniscient, but trustworthy; and not idealized, but real.
For Africa, the lessons from this intersection of mathematics and governance are both urgent and hopeful. The continent stands at a critical juncture: rich in data, talent and ambition, yet vulnerable to becoming a passive consumer of AI systems crafted elsewhere. To ensure sovereignty in the age of algorithms, Africa must develop not only regulatory frameworks but also intellectual infrastructure, nurturing expertise in computational learning theory, data ethics and algorithmic auditing across universities, public institutions and regional bodies.

Policymakers should demand transparency not as a luxury but as a requirement of digital partnerships. Investing in open science and indigenous data ecosystems can help prevent the importation of foreign bias while enabling models trained on African realities. Most importantly, Africa must view AI governance not as a limitation but as an opportunity to lead, creating frameworks that are context-aware, socially rooted and globally impactful. If fairness in AI is probabilistic, then Africa’s role is to redefine those probabilities by designing systems that learn from and serve its own people.

Suggested citation: Marwala, Tshilidzi. "Why AI Governance Must Be Built on the Mathematics of Learning," United Nations University, UNU Centre, 2025-11-17, https://unu.edu/article/why-ai-governance-must-be-built-mathematics-learning.