SPEAKERS
Muruga Perumal RAMASWAMY University of Macau | Huixin ZHONG Xi’an Jiaotong-Liverpool University | Yaodong YANG Peking University Institute for Artificial Intelligence | Wayne WEI WANG Hong Kong University | Li YI University of Macau
DESCRIPTION
The adoption of the UNESCO Recommendation on the Ethics of Artificial Intelligence in November 2021 confirmed artificial intelligence (AI) as a matter of global relevance. As concerns regarding the ethical implications and practical dangers of AI grew, it became clear that non-binding recommendations or codes of conduct might not suffice to address the risks posed by AI effectively. Beginning with the European Commission’s presentation of the draft European Union Artificial Intelligence Act (AI Act), national and regional lawmakers and regulatory authorities began to draft or prepare binding laws and regulations, whether sector-specific or comprehensive, to govern AI.
By 2025, many sectoral or comprehensive laws regulating different aspects of AI were in development or enacted. However, the novel nature of AI—particularly its cross-cutting, cross-boundary, and cross-cultural characteristics, along with its multiple purposes—defies successful classification within existing legal and institutional categories. Consequently, AI warrants new and creative solutions to confront a wide range of risks and challenges that are also rapidly evolving.
In a rapidly changing and highly uncertain global landscape, the debate about AI regulation is largely dominated by the narrative of a “global race to regulate AI,” rather than by a constructive and broad global dialogue seeking solutions to a shared, global problem. Moreover, this narrative is heavily influenced by the terminology and goals set by the industry developing these technologies and applications, a phenomenon often referred to as the “Silicon Valley Effect.” Overall, the dominant narrative produces a narrow focus on the technology itself rather than on its impact on humans and life.
Additionally, it hampers the formulation of concrete but coherent actions to implement the goals outlined in the Pact for the Future, which includes a Global Digital Compact and a Declaration on Future Generations, adopted at the United Nations Summit of the Future held in September 2024.
Against this backdrop, the present panel seeks to contribute to the realization of these goals and other international AI policy initiatives by reframing the dominant narrative of global AI governance. The aim is to develop alternative and innovative governance frameworks. To accomplish this, the panel will first examine the cooperation among the BRICS countries (Brazil, Russia, China, India, and South Africa), whose membership has expanded since 2024 to include Egypt, Iran, the United Arab Emirates, Saudi Arabia, and Ethiopia—forming an important “dialogue and cooperation platform.”
Building on an interdisciplinary dialogue convened at the UN Science Summit, where Integration and Implementation Science (IIS) was applied to bridge perspectives across law, ethics, human-computer interaction, and policy, the panel will consider how the BRICS can serve as an example in the search for such novel governance frameworks or regulatory approaches.
In a second segment, the panel will explore AI alignment, which involves encoding human values and goals into large language models and ensuring that AI systems behave in accordance with human intentions and values. It will examine how China and other BRICS countries can develop AI governance strategies that integrate philosophy, ethics, and policy with AI research.
The third contribution will analyze China’s sovereign AI regime, viewed as an iterative, institutionally embedded process of state-led yet liberalized technological development. It will assess how China’s model balances state coordination with market-driven innovation, creating a dynamic ecosystem responsive to domestic institutional logics and external geopolitical constraints, and consider whether it offers a distinctive pathway for AI governance for other BRICS countries and the world at large.
A fourth contribution will focus on the need to evaluate potential risks and adverse impacts resulting from AI use, and to develop regulatory responses that enable responsible engagement by public authorities. Specifically, it will review core values and strategies aimed at mitigating AI’s adverse effects—such as initiatives promoting non-discrimination, inclusiveness, and capacity building.
In light of these risks and responses, the session will critically assess emerging doctrinal proposals advocating measures to prevent non-human and artificial elements from undermining equitable engagement with AI.
Finally, the session will attempt to propose viable solutions to address the complex and converging challenges associated with AI and related technologies within the context of a highly fragmented legal and institutional international system. Through comparative legal analysis and case studies of AI strategies and regulations in the BRICS, the session will explore ways to enhance international cooperation in AI governance, aiming to establish a more consistent, coherent, and future-proof global governance framework.