On October 30, 2023, the United States issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
Sweeping and thorough, it represents an important milestone in the governance of Artificial Intelligence (AI), especially given the predominance of US-based companies in the AI space. There have not been many efforts globally with this level of ambition. In April 2021, the European Commission published the draft EU AI Act, which has dominated regulatory conversations for the last 2.5 years by presenting a clear, risk-based framework. China has also been issuing AI regulations since 2022, progressively releasing specific guidance on algorithms, deep synthesis, and generative AI.
As a reminder, the draft EU AI Act categorized AI by application (rather than by functionality) into low risk, limited risk, high risk, and unacceptable risk. Of particular interest to the UN are the high-risk applications, which include many uses of AI for sustainable development, peacekeeping and peacebuilding, and humanitarian operations. Once the Act passes, all AI deployments that fall into the high-risk category (for example, biometrics, education, migration, and the judicial system) would need pre-deployment certification from an independent government body, which would ascertain that risks (discrimination, privacy, and so on) had been appropriately analysed and would be mitigated and monitored. The EU AI Act has not yet passed and is still being negotiated.
The US Executive Order bypassed an extensive public negotiation process, although it was certainly discussed at length with private companies, government agencies, and AI experts. It also focuses primarily on further developing the voluntary commitments for AI governance that US-based companies signed on to earlier this year. The White House has produced a Fact Sheet, and the consulting firm EY has published a detailed overview.
Given the potential impact of the US EO on current efforts targeting global AI governance, this article outlines preliminary takeaways and potential consequences for multilateral AI governance initiatives.
Global Coordination and the Pillars of the UN
The UN aims to take a leadership role in global AI governance by facilitating coordination between Member States, supporting the harmonization of protections for all peoples, and promoting global goals, values, and rights. AI has been shown to have an impact on all three key pillars of the UN: human rights, peace and security, and sustainable development.
The EO consolidates the role of the US in the global coordination of AI regulation in a section on strengthening American leadership abroad (Section 11). This includes:
- Explicit mention of engagement with US partners and allies in multilateral organisations;
- Leading efforts both on a strong international framework and global technical standards, which would involve voluntary commitments;
- Addressing the impacts and risks of AI globally, including through the publication of an AI in Global Development Playbook, to be led by the National Institute of Standards and Technology (NIST), and a Global AI Research Agenda, which would monitor international impacts on human rights, the environment, labour, and security.
Several provisions for human rights and civil rights protections are included in the EO. Human rights are specifically referenced in connection with the Global Development Playbook, which will combine the NIST AI Risk Management Framework with other AI governance values for use in “contexts beyond United States borders.”
There is considerable discussion of equity and civil rights, which are reaffirmed in relation to AI. This includes a commitment to ensure that government agencies and the Assistant Attorney General for Civil Rights implement protections against civil rights violations, particularly protection from discrimination and the protection of privacy. The use of AI in the judicial system is specifically noted as an area of concern, with a 365-day deadline for the Attorney General to submit a report to the President on the use of AI in the criminal justice system (this is also one of the high-risk uses of AI noted in the draft EU AI Act).
In October 2022, the US Government also released a Blueprint for an AI Bill of Rights, which is one of three foundational documents for this EO, the second being the NIST AI Risk Management Framework, and the third the list of voluntary commitments which major American AI companies signed on to in July 2023. The Blueprint explicitly refers to protection from risks to civil rights such as “freedom of speech, voting, protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts.”
Security protection is a key consideration in the EO, especially in relation to the protection of critical infrastructure, cybersecurity, and weapons proliferation. For example, the EO highlights how dual-use foundation models could lower the barrier to entry for weapons development. These risks will be managed primarily by the Department of Homeland Security and the Department of Defense. However, the Secretary of the Treasury is also mandated to issue a report on best practices for financial institutions to manage AI-specific cybersecurity risks. Although addressing the security concerns of AI can have an impact on peace, the EO does not address the risks AI poses to global peace as the UN understands it.
In relation to sustainable development, two categories of Sustainable Development Goals are brought into focus: climate change and the environment; and the labour market and the economy.
Climate and the environment: This primarily involves considerable investment in AI for the climate, with specific mention of AI to make energy consumption and distribution more efficient and AI to mitigate climate change risks.
Labour and the economy: This is framed in terms of worker protections, including monitoring impacts on the labour market and finding solutions to risks such as worker displacement and surveillance, as well as addressing job quality and diversity in the AI workforce. There is also a section on the AI economy aiming to “promote competition”, which involves a series of actions to ensure inclusive participation in AI design, development, financing, and leadership, including support for small businesses and regional innovation.
The EO makes no specific reference to the effect of AI on women, although women’s rights are presumably covered by the section on groups that would be impacted by AI discrimination. There is, however, reference to the role of the Gender Policy Council in attracting AI talent into the US Government, thus addressing the issue of gender diversity in AI development and policymaking.
Brussels/Washington Effect
Even though the EU AI Act has yet to come into law, the European Commission’s 2021 draft legislation has already had an effect on regulatory efforts in other jurisdictions. Countries with trade relations with the EU want to ensure compliance with EU regulations to facilitate trade and secure work visas for their workers. Anticipating EU-friendly legislation locally also provides a level playing field for local industry. The EU tends, moreover, to be an early mover in technology regulation, often providing, as it did with the GDPR, a general framework for those regulating afterwards.
The US is in a similar position. It has immense regulatory influence in this sector that extends beyond US borders. While the US EO refers mostly to impacts on American citizens, it will have an important effect on the activities of large AI companies which deploy their technologies internationally.
Here are several ways the EO will impact the global regulatory landscape:
Scope: The EO covers a lot of ground. It aims to cover current thinking on AI risks and opportunities, boosting economic development while promoting the deployment of safe AI, both for individuals and for the country as a whole. In this sense, it adopts both a risk-based and values-based approach.
Defining AI: The regulatory space has struggled with achieving a clear definition of AI. Policy documents often open with very broad definitions (for example, anything that uses data to predict, including statistical models) that then need to be backtracked. Recent policy documents seem to have converged, however, on adopting, or adapting, the 2019 OECD definition, and the EO is no exception. The definition used in the EO reads: “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”
A focus on impact rather than application: The EO differs from the EU AI Act in that it does not categorize AI applications in terms of their innate riskiness. Nevertheless, high-risk uses of AI are implied by the creation of an AI Council, which will include representatives of many key government entities, such as the Department of Education and the Department of Labor. Each government entity is responsible for monitoring and addressing both the risks and opportunities of AI applications in its sector and the impact of AI on that sector.
Cross-cutting/decentralized approach: The EO outlines areas of national priority, then distributes responsibility for leadership amongst relevant government entities. This diminishes the case for a dedicated Department of AI and instead considers AI as a cross-cutting issue addressed by sector. It should be added that many of the tasks appearing in the EO are concurrently assigned to several US agencies, which will require significant collaboration.
Defining timelines: Very clear and tight timelines are provided for many of the actions in the EO. The earliest deadline is 90 days (for specific actions by companies, for example reporting on the deployment of safe and reliable AI), and the latest is 365 days (for analyses of discrimination and other risks, such as the risks of AI in law enforcement and the judicial system). This means that within a year, by 30 October 2024, most of the tasks in the EO should at the very least have been initiated.
This EO is likely to have an important impact on ongoing AI governance efforts at all levels, including regional and ‘minilateral’ efforts. Perhaps the most important impact is the commitment to encourage international partners and allies to adopt the same eight voluntary commitments that American AI companies have signed up to. However, this EO goes far beyond voluntary commitments. It sets up a structure of collaboration and monitoring between US government entities, as well as a mechanism for the US to play a strong role in the global governance of AI.
The EO answers many critical questions on the way in which the US will internally govern AI and its expectations of US-based companies abroad. The US has also since signed the Bletchley Declaration, which was the outcome of the AI Safety Summit in the UK, along with 27 other countries. The Declaration reaffirmed the importance of addressing AI risks at both national and international levels. Over the next few months, we will observe not only the implementation of the EO nationally, but also its effect on other national, regional and multilateral efforts, such as the UN’s AI Advisory Body, which is due to produce a draft report in December.
Eduardo Albrecht, Senior Research Fellow at UNU-CPR, contributed to this blog.
Suggested citation: Eleonore Fournier-Tombs, "US Executive Order on AI: Takeaways for Global AI Governance," UNU-CPR (blog), 10 November 2023, https://unu.edu/cpr/blog-post/us-executive-order-ai-takeaways-global-ai-governance.