Artificial Intelligence (AI) is now firmly on the international agenda. In May, OECD member states endorsed the Principles on Artificial Intelligence, the first international agreement of its kind. France and Canada are expected to formally launch the International Panel on Artificial Intelligence (IPAI) at the G7 meeting in August. These efforts build on a flurry of national and regional initiatives. Despite this progress, the emerging global governance landscape for AI faces three challenges: ensuring non-governmental participation, achieving buy-in from key governments, and adapting to rapidly changing technology. There is, however, a promising yet often-overlooked tool of global governance: international standards-setting bodies can complement emerging AI governance efforts with strengths that address all three challenges.
International standards bodies have a long history of channeling non-governmental stakeholder participation into global impact. Standards bodies, including ISO, IEC, and IEEE, convene experts from academia, business, and government to develop standards through consensus-based processes. Past standards have ensured global interoperability in everything from computer hardware to quality management protocols. These standards can have a pervasive and profound impact on everyday life. Hundreds of thousands of companies from more than 170 countries have implemented environmental sustainability standards at considerable expense, and ISO and IEC have developed safety standards used in industries ranging from self-driving cars to nuclear energy. ISO standards have even been used by the United Nations to develop arms control guidelines. Standards-setting bodies and their standards could be similarly impactful for AI.
Today, international standardization efforts for AI have buy-in from key governments. The US Executive Order on AI prioritized standards development, and the US government is now developing a standards engagement strategy. In a notable contrast to other governance efforts to date, China has also taken a seat at the table. The Chinese government published a standards strategy in 2018 that prioritized international standards development. That same year, the ISO/IEC Joint Technical Committee for Information Technology created a standards sub-committee for Artificial Intelligence (SC 42). China and the US both contested leadership of SC 42, resulting in an unusual compromise: the committee has a US secretariat, is chaired by a US employee of Huawei, and held its first meeting in Beijing. SC 42 is now undertaking foundational research and standards development. The IEEE is also developing a series of standards that address, inter alia, transparency, algorithmic bias, and fail-safe design.
AI is rapidly evolving: it is a general-purpose technology, the final forms and impacts of which remain uncertain. Preemptive agreements on steam engines, electricity, or the internet would have struggled to shape their transformative impacts. The governance-by-principle and research synthesis approaches taken by the OECD and IPAI, respectively, sensibly acknowledge this. Standards offer a governance tool, grounded in technical reality, that complements these other efforts. For example, there is widespread agreement, from the EU, US, OECD, China, and beyond, that “trustworthy AI” is a priority; ongoing standards development in transparency, robustness, and safety can help translate this shared priority into practice. Furthermore, the process of standards development and adoption can build trust not only among public consumers but, significantly, among researchers, labs, and countries. Once developed, standards can enable a global market for AI systems and encourage their widespread and safe deployment.
For a technology that has surprised experts with the speed of its breakthroughs, governance timelines matter. Precautionary efforts to put in place mechanisms that allow governance to scale up are warranted. It took the OECD’s principles on data privacy nearly 40 years to gain substantive teeth, and they did so only following EU action in response to the Snowden revelations. In contrast, once the development of a standard is initiated, it normally takes no more than four years to be published, and expedited procedures are available where needed. Standards can help to rapidly establish guardrails in the international and market competition over AI development.
Despite their strengths, international standards bodies are an incomplete piece of the larger global governance puzzle. Standards are most commonly adopted voluntarily. This means that, absent enforcement, malicious actors could simply disregard their provisions. Furthermore, business prioritizes the development of standards that are needed to support market transactions, yet standards are warranted across the AI lifecycle to promote safety. Absent concerted effort from governments, concerned businesses, and civil society, such standards may be slow to emerge, if they emerge at all. A further risk is demonstrated by ongoing trade tensions and the scrutiny of technology firms like Huawei: increased politicization of standards bodies would complicate this mode of AI governance.
International standards have an important role to play in the global governance of AI. In a recent report, I explore this role and offer recommendations for private-sector actors to support globally legitimate, high-quality standards. Global governance scholars and practitioners should actively consider how standards can support the development and enforcement of further global governance efforts. In the past, standards have been incorporated into international treaties, supported the global diffusion of regulation, and informed government policies as well as procurement contracts. Today, we are early in the development of the technology and even earlier in its governance. Standards offer a first step to help erect guardrails in the international and market competition for AI. Let’s use them as the governance tool that they are.
Peter Cihon is a Research Affiliate at the Center for the Governance of AI at the Future of Humanity Institute at the University of Oxford. His research concerns global private regulation in the governance of AI, including international standards and multi-stakeholder approaches to responsible development. He has worked on digital policy for government, civil society, and academia on three continents.
The opinions expressed in this article are solely those of the author and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners.
Suggested citation: "AI & Global Governance: Using International Standards as an Agile Tool for Governance," UNU-CPR (blog), 2019-07-08, https://unu.edu/cpr/blog-post/ai-global-governance-using-international-standards-agile-tool-governance.