The Dynamic Link Between Human Behaviour and AI Governance

Governance frameworks must respond to new behavioural science insights into human-AI interaction and the societal repercussions of AI breakthroughs.

Artificial intelligence (AI) has made significant strides in all areas of our lives. AI can now monitor patients, safeguard buildings, improve manufacturing processes and write poetry. These are beneficial uses of AI. There are, however, harmful uses and characteristics of AI as well.

For example, AI can discriminate against digital minorities, such as people of African descent and other ethnic minorities. It can be weaponized, with devastating consequences for peace and security. Given that AI can serve both good and bad ends, what should be done? We must ensure that AI is used in ways that maximize the good and minimize the bad. Various strategies have been deployed to promote the beneficial uses of AI, and multiple techniques have been developed to curb the harmful ones. For example, ensuring that the data used to train AI systems is representative can minimize algorithmic bias and discrimination.
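
To make that last point concrete, below is a minimal, illustrative sketch of a representativeness check on training data. The dataset, the `group` field and the reference proportions are hypothetical assumptions for illustration only; a real audit would use vetted demographic benchmarks and dedicated fairness tooling.

```python
# Illustrative sketch: compare each group's share of a training dataset to a
# reference share, and derive sample weights that boost under-represented
# groups. Group labels and reference shares are made up for this example.

from collections import Counter

def representation_report(records, group_key, reference_shares):
    """Print observed vs. reference shares per group and return
    per-group sample weights that correct the imbalance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    weights = {}
    for group, ref_share in reference_shares.items():
        observed = counts.get(group, 0) / total
        print(f"{group}: observed {observed:.2%}, reference {ref_share:.2%}")
        # A weight above 1 up-weights an under-represented group in training.
        weights[group] = ref_share / observed if observed else float("inf")
    return weights

# Hypothetical records: group A is over-represented relative to the benchmark.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_report(data, "group", {"A": 0.6, "B": 0.4}))
```

Checks like this address only one source of bias (sampling imbalance); label quality, feature choice and deployment context also matter.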

However, fixing a defective algorithm is much easier than reforming a flawed human being who intends to use AI maliciously. For example, it is far more challenging to change a person who discriminates against ethnic minorities than to fix an AI system that discriminates against ethnic minorities. To improve the AI system, we must apply best practices in handling data, designing algorithms and deploying AI technology, all of which require good human behaviour.

Discussions of AI development, deployment and regulation frequently focus on technological and ethical concerns. However, a significant and underexplored aspect of this discussion is the interplay between human behaviour and AI governance. The two form a complex confluence that considerably shapes how AI technologies are conceived, deployed and regulated. As AI permeates every aspect of our lives, understanding and leveraging this confluence is not only advantageous but also necessary for developing AI systems that are both effective and fair.

Humans are at the heart of AI development, and their decisions, prejudices, and actions affect AI systems. Behavioural science, which studies human behaviour via psychological, cognitive and emotional lenses, provides essential insights into how developers and designers should approach the building of AI. It highlights the cognitive biases that can influence algorithm design and the ethical blind spots resulting from prioritizing technical efficiency over social consequences.

Incorporating human behaviour into AI governance requires identifying and addressing these human issues. It entails developing frameworks enabling developers to reflect on the potential biases they bring to the design process and cultivating a culture that values ethical considerations and societal well-being over technical innovation.

End-user behaviour must also be considered when governing AI. Behavioural science provides a lens through which to investigate how people engage with AI systems: how they respond to recommendations, make decisions based on AI-generated information and build trust in AI technologies. These findings are critical for developing AI systems that are not only user-friendly but also promote beneficial behavioural outcomes, such as improving decision-making processes and preventing the reinforcement of negative biases.

Furthermore, behavioural science can inform public education campaigns on AI, enabling users to be more critical and informed in their interactions with AI systems. Governance frameworks grounded in behavioural science can raise awareness of AI’s possible biases and limits, allowing people to interact ethically and effectively with AI.

The link between human behaviour and AI governance is dynamic. As AI technologies advance, so will the societal norms and behaviours that shape and are shaped by them. Governance frameworks must be adaptive and ready to respond to new insights from behavioural science into human-AI interaction and the societal repercussions of AI breakthroughs.

This adaptability necessitates a participatory and inclusive governance strategy involving AI developers, users, behavioural scientists, ethicists, policymakers and other stakeholders in ongoing discourse. Such a strategy can help keep governance frameworks current and effective in the face of rapid technological development and shifting human behaviours.

Education is critical to ensuring the ethical governance of AI because it provides developers and consumers with the knowledge and skills needed to navigate the complicated moral environment of AI technology. Technologists can gain a thorough awareness of the ethical consequences of their work by participating in comprehensive educational programs that integrate ethics, data privacy and societal effects into the core curriculum of computer science and AI courses.

Education can help people understand AI technologies, promoting informed and critical engagement with AI systems. By incorporating ethical considerations into the education of all stakeholders involved in creating and using AI, we can foster a culture of responsibility and accountability. This strategy teaches individuals to foresee and address ethical difficulties and promotes the advancement of AI systems that emphasize human values and social well-being. As we pave the road to the future of AI, let us not underestimate the value of understanding human behaviour in guiding us toward more responsible and beneficial AI governance.

This article was first published by Forbes Africa. Read the original article on the Forbes Africa website. 

Suggested citation: Marwala, Tshilidzi. "The Dynamic Link Between Human Behaviour and AI Governance," United Nations University, UNU Centre, 2024-05-31, https://unu.edu/article/dynamic-link-between-human-behaviour-and-ai-governance.