Artificial intelligence (AI) is advancing faster than the systems created to regulate it. Around the world, governments, technology companies and international organizations are struggling with a key question: how can we ensure AI innovation benefits society while avoiding harm?
The challenge lies not only in designing regulations but also in aligning incentives. Companies developing AI systems often know far more about their models than regulators do. They understand the data used, the potential risks and the limitations of their systems. Regulators, on the other hand, must depend on external audits, disclosures and technical evaluations to assess safety and compliance. This information asymmetry creates a classic governance problem: how can regulators motivate companies to disclose risks rather than hide them?
One unexpectedly effective solution comes from a straightforward rule for dividing a cake. Economists refer to this as the cutter-chooser rule. The concept is simple: if two people need to split a cake, one cuts it into two pieces, and the other chooses first. Because the cutter knows the chooser will pick the piece they prefer, the cutter has a strong reason to divide the cake fairly. Self-interest leads to fairness.
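To see why self-interest produces fairness, it helps to work through the numbers. The short sketch below is purely illustrative (a stylized Python model, not part of any regulatory toolkit): whatever split the cutter proposes, the chooser takes the larger piece, so the cutter keeps the smaller one, and the cutter's share is largest when the cake is divided exactly in half.

```python
# Stylized divide-and-choose game: the cutter picks a split,
# the chooser takes the larger piece, and the cutter keeps the rest.
def cutter_payoff(split: float) -> float:
    """Share the cutter keeps if the cake is cut into pieces of size split and 1 - split."""
    chooser_piece = max(split, 1 - split)  # the chooser picks the piece they prefer
    return 1 - chooser_piece               # the cutter is left with the other piece

# Try a range of possible cuts: the cutter's share peaks at an even split.
for split in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"cut at {split:.1f} -> cutter keeps {cutter_payoff(split):.1f}")
# cut at 0.5 -> cutter keeps 0.5; any other cut leaves the cutter with less.
```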
This principle belongs to a field called mechanism design, often described as “reverse game theory”. Instead of predicting behavior, mechanism design works backward from a desired outcome, creating rules that motivate people to act in ways that fulfill it.
In AI governance, the goal is clear: safe, transparent and trustworthy AI systems. But reaching that goal requires institutions that motivate developers to honestly disclose risks and design systems responsibly. The cutter-chooser principle provides an elegant way to understand this challenge.
Imagine a regulatory framework where AI developers serve as the cutters. They are required to disclose information about their models: training data sources, potential biases, safety limitations and operational risks. They must also suggest how their systems should be evaluated or audited.
The regulator acts as the chooser. Regulators can independently select evaluation tests, auditing procedures or deployment constraints based on the developer’s disclosures. Since developers know that regulators will pick the strictest or most revealing evaluation among the available options, they are motivated to provide accurate information from the outset. In other words, the system incentivizes honesty.
This approach can be put into practice in several ways. Developers might be required to provide detailed risk profiles of their AI models along with recommended evaluation frameworks. Regulators would then choose the most suitable or strictest assessment methods. If a developer understates risk, the gap is likely to surface under the stricter assessments regulators select, inviting tighter oversight.
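A stylized example illustrates why understating risk does not pay. In the hypothetical model below, the figures for risk, detection and penalties are invented for illustration; the point is only that when the regulator always selects the most revealing evaluation, an understated disclosure is likely to be exposed and ends up costing the developer more than honesty would.

```python
# A stylized disclosure game (illustrative numbers, not an actual regulatory model).
# The developer discloses a risk level; the regulator then applies the strictest
# available evaluation, which detects the true risk with high probability.

TRUE_RISK = 0.8        # risk the developer privately knows
DETECTION_RATE = 0.9   # chance the strictest evaluation uncovers the true risk
PENALTY = 10.0         # cost of tighter oversight when an understatement is exposed
OVERSIGHT_COST = 1.0   # up-front compliance cost per unit of disclosed risk

def expected_cost(disclosed_risk: float) -> float:
    """Developer's expected cost when the regulator always picks the strictest test."""
    cost = OVERSIGHT_COST * disclosed_risk
    if disclosed_risk < TRUE_RISK:          # understatement
        cost += DETECTION_RATE * PENALTY    # likely exposed, triggering tighter oversight
    return cost

print("honest disclosure :", expected_cost(0.8))  # 0.8
print("understated risk  :", expected_cost(0.2))  # 0.2 + 9.0 = 9.2
# With a chooser-style regulator, honesty is the cheaper strategy.
```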
Similarly, developers could suggest safety thresholds or operational limits for their systems, knowing that regulators have the authority to choose the most conservative option. Once again, honesty remains the rational strategy.
The strength of this approach is that it aligns incentives rather than relying solely on enforcement. Rather than forcing compliance through constant monitoring, it structures decision-making so that honest disclosure is the developer’s best choice.
This concept is not new. Variations of it already exist in other fields. Financial regulation, for instance, depends on disclosure requirements and stress tests that motivate banks to reveal risks before crises happen. Environmental policy often uses market-based tools that align corporate incentives with environmental protection.
AI governance can draw lessons from these examples. The need for such mechanisms is especially urgent because AI operates internationally. A model trained in one country can be used worldwide in seconds. However, regulatory frameworks remain fragmented across jurisdictions. Without incentives for proper governance, companies might just move development to places with less oversight, a practice known as regulatory arbitrage.
Mechanism design provides a way to address this problem without hindering innovation. By establishing governance structures that make transparency and safety the rational choice for developers, regulators can encourage responsible AI development while the technology continues to advance.
Of course, no single rule can address the complexities of AI governance. But the cutter-chooser principle demonstrates an important lesson: sometimes the most effective solutions are also the simplest. Fairness in AI governance won’t happen on its own. It needs to be intentionally built.
As AI becomes more integrated into our economies, institutions and daily lives, we need governance frameworks that promote cooperation instead of secrecy. By aligning incentives between developers and regulators, we can move closer to a system in which technological progress and public trust support each other.
Another example of this incentive alignment is the regulatory sandbox, which provides a safe environment where rapid technological innovation and public safety can coexist through collaboration rather than conflict. By lowering regulatory barriers to entry, regulators prompt developers to be open about how their algorithms function, information that is often guarded as a trade secret. In exchange, developers receive limited legal protection and direct engagement with policymakers, helping to create regulations that are technically feasible rather than mere bureaucratic hurdles. This approach shifts away from the traditional “build first, ask for forgiveness later” mentality towards designing technology with compliance in mind from the outset, potentially accelerating the development of trustworthy AI. A sandbox typically operates as a cycle: developers submit applications with specific use cases; the technology is tested with real users in a controlled environment; regulators monitor for issues such as bias or security vulnerabilities; and finally, the project leaves the sandbox to scale in the market, with regulatory insights informing future laws.
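The sketch below is a hypothetical rendering of that cycle; the stage names and the example project are illustrative, not drawn from any specific sandbox programme.

```python
# Minimal sketch of a sandbox lifecycle (illustrative stages and fields only).
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    APPLICATION = auto()  # developer submits a specific use case
    TESTING = auto()      # technology trialled with real users under controls
    MONITORING = auto()   # regulator watches for bias and security issues
    EXIT = auto()         # project scales in the market; lessons feed future rules

@dataclass
class SandboxProject:
    use_case: str
    stage: Stage = Stage.APPLICATION
    findings: list[str] = field(default_factory=list)

    def advance(self, finding: str | None = None) -> None:
        """Record any regulatory finding and move to the next stage of the cycle."""
        if finding:
            self.findings.append(finding)
        stages = list(Stage)
        self.stage = stages[min(stages.index(self.stage) + 1, len(stages) - 1)]

# Example run-through of the cycle:
project = SandboxProject(use_case="credit-scoring model")
for note in (None, "bias check passed", "no security vulnerabilities found"):
    project.advance(note)
print(project.stage, project.findings)
# Stage.EXIT ['bias check passed', 'no security vulnerabilities found']
```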
Ultimately, governing AI might not demand reinventing the wheel. It may just require figuring out how to share the cake fairly.
Suggested citation: Tshilidzi Marwala. "Cutting the Cake of AI Governance: A Simple Rule for a Complex World," United Nations University, UNU Centre, 2026-03-16, https://unu.edu/article/cutting-cake-ai-governance-simple-rule-complex-world.