Artificial Intelligence (AI) is reshaping our world — revolutionizing industries, enhancing decision-making and offering solutions to some of humanity’s most pressing challenges. Yet, as AI advances, its governance remains a complex and evolving issue. While ethical considerations, societal impacts and regulatory frameworks have been widely debated, one fundamental challenge remains overlooked: the algorithmic problem in AI governance.
AI algorithms are at the heart of decision-making in machine learning systems. However, they are designed, trained and optimized using subjective metrics, deterministic models and probabilistic reasoning, each introducing unique governance challenges. The choices embedded in these algorithms influence technical performance, fairness, transparency and accountability. Addressing the algorithmic problem is critical to ensuring that AI systems are aligned with societal values and operate ethically, responsibly and inclusively.
The subjectivity of AI design
AI is often perceived as an objective and data-driven technology, yet its foundations are deeply subjective. Human biases influence the selection of training data, the choice of algorithms and the trade-offs between accuracy, efficiency and interpretability. Developers prioritize specific objectives — maximizing precision, minimizing error or optimizing for computational speed — each of which shapes AI systems’ outcomes. Often made without a governance lens, these decisions have real-world consequences: they can reinforce existing biases, amplify systemic inequalities or create opaque decision-making systems that lack accountability.
For instance, performance metrics like Mean Squared Error (MSE) are widely used to optimize AI models, yet they fail to account for ethical considerations such as fairness and inclusivity. MSE minimizes overall prediction error but can disproportionately penalize rare events, making AI systems less effective in critical areas such as anomaly detection, disaster prediction and health-care diagnostics. Similarly, Maximum Likelihood Estimation (MLE), a cornerstone of statistical AI, prioritizes the most probable outcomes while neglecting rare or underrepresented data points — potentially exacerbating biases against marginalized communities.
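To make this concrete, the toy sketch below illustrates the point with entirely hypothetical numbers: a predictor that minimizes overall MSE can look acceptable on aggregate while performing very poorly on the rare cases that matter most in anomaly detection, disaster prediction or diagnostics.

```python
# Illustrative sketch (hypothetical numbers): minimizing overall MSE can
# mask poor performance on rare events.
import numpy as np

rng = np.random.default_rng(0)

# 990 "common" outcomes near 0, 10 "rare" outcomes near 10 (e.g. anomalies)
y_common = rng.normal(loc=0.0, scale=1.0, size=990)
y_rare = rng.normal(loc=10.0, scale=1.0, size=10)
y = np.concatenate([y_common, y_rare])

# The constant prediction that minimizes MSE is the overall mean,
# which is pulled almost entirely toward the common cases.
y_hat = np.full_like(y, y.mean())

mse_overall = np.mean((y - y_hat) ** 2)
mse_common = np.mean((y_common - y_hat[:990]) ** 2)
mse_rare = np.mean((y_rare - y_hat[990:]) ** 2)

print(f"Overall MSE: {mse_overall:.2f}")        # looks acceptable on aggregate
print(f"MSE on common cases: {mse_common:.2f}")
print(f"MSE on rare cases: {mse_rare:.2f}")     # far worse: rare events are effectively ignored
```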
If AI governance is to be effective, it must address these algorithmic biases at their root. This requires interdisciplinary collaboration among policymakers, computer scientists, ethicists and social scientists to ensure that AI models are both technically efficient and socially responsible.
The challenge of truth and accuracy in AI
A central dilemma in AI governance is the distinction between discrete truth and continuous accuracy. Traditional rule-based systems operate within binary frameworks — classifying inputs as true or false. In contrast, machine learning models, particularly those employing deep learning, operate within probabilistic spaces, where predictions are based on degrees of confidence rather than absolute certainty. This fundamental difference complicates governance.
Should AI systems be designed to prioritize discrete truth, ensuring precise and deterministic outcomes? Or should they operate within probabilistic frameworks that better reflect real-world uncertainty but introduce ambiguity? The governance challenge lies in determining acceptable thresholds for accuracy, fairness and risk.
This is particularly important in high-stakes health-care and criminal justice applications, where even minor algorithmic errors can have profound consequences. If an AI-powered medical diagnostic tool predicts a disease with 95% confidence, is that accuracy sufficient for clinical decision-making? If an AI-driven sentencing algorithm estimates the likelihood of reoffending at 70%, should that prediction influence judicial rulings? These questions highlight the need for governance frameworks that integrate statistical confidence measures with ethical oversight to ensure AI decisions remain accountable.
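One way such thresholds might be operationalized is sketched below; the policy and the cut-off values (which echo the figures above purely for illustration) are assumptions, and in practice they would be set by clinicians, courts and regulators rather than by developers alone.

```python
# Hypothetical sketch of a confidence-threshold policy for a probabilistic model.
# The thresholds (0.95, 0.70) are illustrative only.
def triage_decision(p_disease: float,
                    act_threshold: float = 0.95,
                    review_threshold: float = 0.70) -> str:
    """Map a model's probabilistic output to a governed action."""
    if p_disease >= act_threshold:
        return "recommend treatment pathway (still subject to clinician sign-off)"
    if p_disease >= review_threshold:
        return "flag for mandatory human review"
    return "no automated action; record prediction for audit"

for p in (0.97, 0.80, 0.40):
    print(p, "->", triage_decision(p))
```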
Data centralization vs. distribution in AI governance
Another significant governance challenge stems from how AI systems are trained. The reliance on centralized, curated datasets — often dominated by sources like Wikipedia and other large, Western-centric databases — raises concerns about representational bias and the exclusion of diverse perspectives. While centralized data improves efficiency, it risks amplifying dominant narratives and marginalizing alternative viewpoints.
Distributed data systems, which leverage diverse and decentralized sources, offer a potential solution. However, they introduce governance challenges of their own, including data inconsistency, privacy risks and the need for robust validation mechanisms. Striking the right balance between centralized efficiency and distributed inclusivity is critical for ensuring that AI systems remain accurate and representative of global perspectives.
Federated learning, a technique that allows AI models to be trained across decentralized datasets without directly accessing raw data, presents a promising governance solution. AI developers can reduce biases while maintaining privacy and security by adopting such approaches. However, the successful implementation of federated learning requires clear governance policies to ensure equitable access to training data and prevent the monopolization of AI capabilities by a handful of dominant players.
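A minimal sketch of the underlying idea is shown below, using a made-up linear model and synthetic client data; it is illustrative only, and a real deployment would rely on an established federated learning framework together with secure aggregation and privacy safeguards.

```python
# Minimal, illustrative federated averaging (FedAvg-style) sketch in NumPy.
# Client datasets and the linear model are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model; raw data never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients with differently distributed (non-IID) local datasets
clients = []
for shift in (0.0, 1.0, -1.5):
    X = rng.normal(shift, 1.0, size=(100, 2))
    y = X @ np.array([2.0, -1.0]) + rng.normal(0, 0.1, size=100)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # The server aggregates only model weights, weighted by each client's data size
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("Federated estimate of the coefficients:", w_global)  # approaches [2, -1]
```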
Transparency, accountability and explainability in AI
Trust in AI hinges on transparency, yet many of today’s most powerful models function as opaque “black boxes.” Deep learning algorithms, for instance, generate highly accurate results but often do so without providing clear explanations for their decisions. This lack of explainability raises critical governance concerns, particularly in regulated industries such as finance, health care and law enforcement.
Comparative frameworks, such as fuzzy logic versus deep learning, illustrate the trade-off between transparency and accuracy. Fuzzy logic models, which mimic human reasoning with rule-based decision structures, offer greater explainability but lower predictive power. Deep learning models, by contrast, excel in performance but lack interpretability.
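A toy example of the rule-based side of this trade-off is sketched below; the membership functions, rule weights and credit-scoring framing are invented for illustration, but every step of the resulting score can be read and audited in a way a deep network's internal weights cannot.

```python
# Illustrative, fuzzy-style rule-based risk score whose every step is inspectable.
# The membership functions and rule weights are invented for illustration.
def high_debt_ratio(debt_to_income: float) -> float:
    """Degree (0..1) to which the applicant's debt-to-income ratio is 'high'."""
    return min(max((debt_to_income - 0.2) / 0.4, 0.0), 1.0)   # ramps up from 0.2 to 0.6

def short_credit_history(years: float) -> float:
    """Degree (0..1) to which the credit history is 'short'."""
    return min(max((5.0 - years) / 5.0, 0.0), 1.0)            # ramps from 5 years down to 0

def risk_score(debt_to_income: float, history_years: float) -> float:
    """Simple weighted combination of fuzzy rule activations."""
    r1 = high_debt_ratio(debt_to_income)        # IF debt ratio is high THEN risk is high
    r2 = short_credit_history(history_years)    # IF history is short THEN risk is high
    return 0.6 * r1 + 0.4 * r2                  # explicit, inspectable rule weights

print(risk_score(debt_to_income=0.5, history_years=2))  # each term traces back to a named rule
```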
Governance strategies must, therefore, mandate transparency mechanisms such as Explainable AI (XAI) and algorithmic auditing. These approaches provide stakeholders with insights into AI decision-making processes, enabling greater accountability and fostering public trust. Policymakers must also implement regulatory standards that ensure AI models remain auditable, particularly in contexts where algorithmic decisions impact fundamental rights and freedoms.
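As a minimal illustration of what an algorithmic audit might check (the data are synthetic and the four-fifths cut-off is used only as a common rule of thumb, not as a legal standard), an auditor could compare a model's decision rates across demographic groups:

```python
# Illustrative auditing check: a demographic-parity (disparate-impact) comparison
# on synthetic decision logs. The 0.8 threshold is an illustrative rule of thumb.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic audit log: group membership and the model's binary decisions
groups = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])
approved = np.where(groups == "A",
                    rng.random(1000) < 0.60,   # group A approved ~60% of the time
                    rng.random(1000) < 0.45)   # group B approved ~45% of the time

rates = {g: approved[groups == g].mean() for g in ("A", "B")}
ratio = rates["B"] / rates["A"]

print("Approval rates:", {g: round(r, 3) for g, r in rates.items()})
print(f"Disparate-impact ratio (B/A): {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: ratio falls below the illustrative 0.8 threshold")
```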
The environmental and ethical cost of computational efficiency
As AI models grow in complexity, so too do their computational demands. Training state-of-the-art machine learning models requires vast amounts of energy, contributing to significant environmental costs. The push for computational efficiency, often achieved through parallel processing, also raises ethical concerns about equitable access to AI resources.
The disparity between nations and institutions with access to advanced computing power and those without is widening, creating a technological divide that risks deepening global inequalities. AI governance must address this imbalance by promoting sustainable AI practices, investing in energy-efficient algorithms and ensuring that AI benefits are equitably distributed. Regulatory incentives for “green AI” and collaborative initiatives to democratize access to high-performance computing are essential steps toward a more inclusive AI ecosystem.
Rethinking AI governance for a low-precision technology
Despite its reputation for precision, AI is fundamentally a low-precision technology. Its reliance on approximation, statistical inference and probabilistic reasoning introduces inherent uncertainties. Recognizing this reality is crucial for developing governance frameworks that prioritize resilience, adaptability and ethical safeguards.
Rather than viewing AI’s low precision as a limitation, it should be reframed as an opportunity for critical governance innovations. By incorporating mechanisms such as Bayesian risk quantification, dynamic regulatory oversight and interdisciplinary ethical review boards, AI governance can become more responsive to emerging challenges while ensuring that AI systems remain accountable and aligned with societal values.
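As one hypothetical illustration of Bayesian risk quantification (the audit counts, prior and threshold below are assumptions rather than figures from any real system), an oversight body could ask not "what is the error rate?" but "how likely is it that the error rate exceeds the tolerable limit?":

```python
# Hypothetical sketch of Bayesian risk quantification for an audited error rate.
# Counts, prior and threshold are invented for illustration.
from scipy import stats

# Audit sample: 12 errors observed in 400 reviewed decisions (illustrative)
errors, n = 12, 400

# Beta(1, 1) prior on the true error rate; posterior is Beta(1 + errors, 1 + n - errors)
posterior = stats.beta(1 + errors, 1 + n - errors)

threshold = 0.05  # e.g. a regulator-defined maximum tolerable error rate
prob_exceeds = 1 - posterior.cdf(threshold)
lower, upper = posterior.ppf([0.025, 0.975])

print(f"Posterior mean error rate: {posterior.mean():.3f}")
print(f"95% credible interval: [{lower:.3f}, {upper:.3f}]")
print(f"P(error rate > {threshold:.0%}): {prob_exceeds:.2%}")
```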
Toward a responsible AI future
The algorithmic problem in AI governance is not just a technical issue but a societal imperative. As AI continues to advance, governance frameworks must evolve alongside it, integrating technical expertise with ethical foresight and regulatory innovation.
Addressing the subjectivity of AI design, balancing truth and accuracy, mitigating algorithmic biases, ensuring data representativeness, enhancing transparency and promoting sustainable computational practices are all essential steps toward responsible AI governance. By fostering interdisciplinary collaboration and embedding ethical considerations at every stage of AI development, we can create systems that are not only technologically advanced but also just, inclusive, and aligned with the greater good.
The future of AI is not just about algorithms — it is about the values we encode within them. Ensuring AI serves humanity requires a governance approach that is dynamic, adaptable and forward-thinking, just as the technology itself is.
Suggested citation: Marwala, Tshilidzi. "The Algorithmic Problem in Artificial Intelligence Governance," United Nations University, UNU Centre, 2025-01-23, https://unu.edu/article/algorithmic-problem-artificial-intelligence-governance.