
Safety in AI Systems

The concept of safety underpins the need to ensure trustworthiness and the preservation of human rights, as well as the potential of AI to promote sustainability and peace.

Date Published
21 Jul 2021
Authors
JeongHyun Lee, Attlee Gamundani, Serge Stinckwich

The UN Secretary-General's Roadmap for Digital Cooperation provides a guide for key stakeholders around the world to discuss Artificial Intelligence (AI) issues of trustworthiness, human rights, safety, sustainability and the promotion of peace. The concept of safety underpins, on the one hand, the need to ensure trustworthiness and the preservation of human rights and, on the other, the potential of AI technologies to bring about sustainability and promote peace. At UNU Macau, we are committed to coordinating efforts to ensure the safe adoption of technology in addressing policy-relevant needs.

Safety in AI

Safety refers to AI systems that are reliable, accurate, robust and resilient in their functionality, and that protect the privacy and security of the data they handle. It also means being protected from possible risks, dangers and threats (Amodei et al., 2016). The global community has been exposed to the misuse of digital technologies through cyberattacks and disinformation. Without robust technical systems, safety is threatened by the disruption and abuse of technologies. For example, during the COVID-19 pandemic, the International Criminal Police Organization reported a rise in global ransomware attacks. Over the past few years, there have been significant UN-wide efforts to address the rising threats posed by digital technologies and AI, aiming to maintain reliable, real-time and secure systems. Any failure or exploitation of an AI system can cause harm in both the digital and physical worlds and, eventually, undermine public trust and the Sustainable Development Goals (Leslie, 2019; Yampolskiy & Spellchecker, 2016).

UNU Macau on safety

At UNU Macau, our Smart Citizen Cyber Resilience team is focused on cyber safety and aims to strengthen society's ability to respond to cyber-risks by enhancing the resilience of citizens and civil society stakeholders in Macau and around the world.

The key objectives of this project are:

  1. To include more citizen-centric perspectives in cybersecurity strategies of different countries in the Asia-Pacific region;

  2. To develop a framework to understand and assess the cyber-resilience posture of civil society stakeholders;

  3. To design a cyber-resilience management intervention.

This project has been conducted in partnership with Caritas and is supported by funding from Macau's Science and Technology Development Fund (FDCT).

In the future, malicious actors could use more aggressive AI systems to launch cyberattacks on a larger scale. It is therefore important for stakeholders to investigate and conduct further research on AI safety.

This article is the third in our blog series on responsible AI. The series was inspired by the call in the UN Secretary-General's Roadmap for Digital Cooperation for artificial intelligence that is trustworthy, human rights-based, safe and sustainable, and that promotes peace.


Suggested citation: JeongHyun Lee, Attlee Gamundani and Serge Stinckwich, "Safety in AI Systems," UNU Macau (blog), 21 July 2021, https://unu.edu/macau/blog-post/safety-ai-systems.
