
The Dual Faces of Algorithmic Bias — Avoidable and Unavoidable Discrimination

By ensuring the data fed into algorithms is as diverse as possible, we can harness the potential of algorithms to reflect humanity's best qualities.

With the rise of artificial intelligence (AI), algorithms have become the invisible threads that weave through our daily lives, impacting decisions ranging from the ordinary to the extraordinary.

However, as AI’s popularity grows, so does the recognition of its flaws, including bias and discrimination. These biases, some controllable and others seemingly uncontrollable, jeopardize the integrity of algorithmic decisions and reflect deeper societal fractures.

We need to explore the complex landscape of algorithmic bias, advocating for a nuanced understanding and a proactive approach to addressing these digital manifestations of social, political, economic and technological faults.

Avoidable algorithmic biases are the result of oversight or neglect. They emerge when the data feeding the algorithms is unrepresentative or when the creators of these systems unintentionally incorporate their unconscious biases into the code.

What are the consequences? Stereotypes are perpetuated, and social injustices are reinforced.

The opacity of complex algorithms, mainly those driven by deep learning, adds further difficulty. Deep learning is a branch of AI based on neural networks: models trained, through multiple layers, to map input variables (e.g. an X-ray image of a person’s lungs) to an output (e.g. whether or not that person has lung cancer). The “black box” nature of these deep learning systems makes it difficult to identify biases, making fairness challenging to achieve with these tools.
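To make this concrete, below is a minimal sketch of the kind of layered classifier described above, written in Python with the Keras API. The layer sizes are arbitrary choices and the random arrays merely stand in for real labelled X-rays; this illustrates the structure of such a system, not an actual diagnostic model.

```python
# Minimal sketch of a deep learning classifier: stacked layers of learned
# transformations map an input image to a yes/no output.
# All sizes and data here are illustrative assumptions.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(128, 128, 1)),        # grayscale X-ray image
    keras.layers.Conv2D(16, 3, activation="relu"),  # learned feature detectors
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),      # opaque intermediate layer
    keras.layers.Dense(1, activation="sigmoid"),    # probability of a positive finding
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholders standing in for a labelled X-ray dataset.
images = np.random.rand(100, 128, 128, 1)
labels = np.random.randint(0, 2, size=(100, 1))
model.fit(images, labels, epochs=2, verbose=0)
```

Nothing in the trained weights explains why a given image was flagged, which is precisely what makes auditing such systems for bias so hard.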

Addressing avoidable biases is feasible, but requires diligence and a commitment to diversity and transparency. It starts with ensuring the data fed into algorithms is as diverse and representative as possible. It also includes creating diversity among the teams that develop these algorithms, ensuring that diverse perspectives are considered and inherent biases are identified.

However, these solutions are imperfect, and residual biases and discrimination remain. The realistic goal, therefore, is to minimize algorithmic bias rather than to eliminate it entirely.

Confronting the bias

Consider language recognition technology for low-resource languages, using the Ju/’Hoansi San language as an illustration. Potential algorithmic bias against the Ju/’Hoansi San, an indigenous ethnic group of southern Africa numbering between 50,000 and 75,000 people, exemplifies the broader issue of how AI systems might unavoidably discriminate against minority populations.

Because of this small population, the Ju/’Hoansi San’s distinctive language is inherently underrepresented in digital archives. The result is AI language systems that are ill-equipped to recognize the Ju/’Hoansi San language and that misinterpret the nuances of their click language.

To mitigate this, transfer learning from related but better-resourced languages, such as isiXhosa, can aid in developing more inclusive AI systems despite the limited availability of massive datasets.
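As a rough illustration of the idea, the sketch below freezes an encoder assumed to have been pre-trained on a better-resourced language and trains only a small new head on scarce target-language data. The feature shapes, the ten-word vocabulary and the random arrays are hypothetical stand-ins, not a real Ju/’Hoansi pipeline.

```python
# Sketch of transfer learning: reuse representations learned from a
# well-resourced language; retrain only a small head on limited data.
# Shapes, classes and data are illustrative assumptions.
import numpy as np
from tensorflow import keras

# Assume this encoder was previously trained on a larger speech corpus.
base = keras.Sequential([
    keras.layers.Input(shape=(200, 40)),            # 200 frames x 40 audio features
    keras.layers.Conv1D(64, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(64, activation="relu"),
], name="pretrained_encoder")
base.trainable = False                              # freeze the transferred layers

model = keras.Sequential([
    base,
    keras.layers.Dense(32, activation="relu"),      # small task-specific head
    keras.layers.Dense(10, activation="softmax"),   # e.g. 10 target word classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A small labelled set stands in for scarce target-language recordings.
features = np.random.rand(50, 200, 40)
word_ids = np.random.randint(0, 10, size=(50,))
model.fit(features, word_ids, epochs=3, verbose=0)
```

Because only the small head is trained, far fewer labelled examples are needed than training the whole network from scratch would require.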


It is essential to note, however, that while this strategy reduces algorithmic discrimination, it does not eliminate it. The crux of the matter, therefore, is data representativity.

Data representativity depends on the political economy. The political economy of data representation for AI training is a complex subject that intersects with power dynamics, economic interests, and social structures. Accordingly, the data that AI systems consume is more than just a collection of neutral bits and bytes; it reflects the sociological, political, and economic conditions from which it arises.

These sociological, political, and economic conditions are challenging to fix in the short term, and even more so within the life cycle of algorithm development. We therefore continue to develop algorithms under these imperfect conditions.

Entities with more resources and influence can frequently acquire, manipulate and curate massive datasets, thereby moulding AI models trained on this data to represent their opinions and interests. This dynamic might result in a representativity gap, in which marginalised communities are either underrepresented or misrepresented in AI training datasets.

As a result, the emerging AI systems may reinforce existing biases, exacerbate structural inequities, and fail to meet the different requirements of the global community.

Diverse datasets

Addressing this disparity necessitates a concerted effort to democratize data gathering and curation, ensuring that AI systems are trained on datasets that are not only large but also diverse and representative of the complex tapestry of human experiences.

This undertaking is a technological, political and economic problem, necessitating a collaborative strategy in which policymakers, engineers and communities work together to design a fairer and more inclusive AI ecosystem.

Such collaboration is critical for developing AI systems that are truly global and inclusive. Joint activities in data collection and curation, including community engagement, must ensure the data is linguistically accurate and culturally representative.

Furthermore, closing the gap calls for novel AI training methods, such as transfer learning or unsupervised learning techniques, that maximise learning from minimal data. Bridging this gap is more than just a technical issue; it is a commitment to linguistic diversity and cultural inclusivity, ensuring that the benefits of AI are available to everyone, regardless of language.
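As a hedged example of the unsupervised option, the sketch below pre-trains a small autoencoder on unlabelled feature vectors, which are usually far easier to gather than transcribed recordings. The feature dimension and the random data are assumptions for illustration only.

```python
# Sketch of unsupervised pre-training: an autoencoder learns structure
# from unlabelled audio features before any scarce labels are used.
# The feature dimension and data are illustrative assumptions.
import numpy as np
from tensorflow import keras

autoencoder = keras.Sequential([
    keras.layers.Input(shape=(40,)),             # one frame of audio features
    keras.layers.Dense(16, activation="relu"),   # compressed representation
    keras.layers.Dense(40),                      # reconstruction of the input
])
autoencoder.compile(optimizer="adam", loss="mse")

# Random arrays stand in for plentiful untranscribed speech.
unlabelled = np.random.rand(500, 40)
autoencoder.fit(unlabelled, unlabelled, epochs=5, verbose=0)
```

The compressed representation learned this way can then seed a supervised model, in the same spirit as the transfer learning sketch above.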


While some biases are avoidable, others are unavoidable and ingrained in the fabric of our cultural and technical frameworks. These unavoidable biases stem from the intricacies of social phenomena, the diverse nature of justice, and the ever-changing character of societal standards.

Fairness, a concept as old as humanity, is fundamentally subjective. What one person considers fair may not be fair to another. In their pursuit of justice, algorithms frequently run into competing definitions of fairness. Optimizing for one sort of fairness may unintentionally introduce bias under another, illustrating the paradoxical nature of our quest for equity.
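A toy calculation makes this tension concrete. In the sketch below, with entirely made-up numbers, the model’s decisions satisfy demographic parity (equal rates of positive decisions across two groups) yet violate equal opportunity (equal true-positive rates); in this toy data, any change that narrows the true-positive gap also breaks the parity.

```python
# Toy illustration of competing fairness definitions: decisions can satisfy
# demographic parity while violating equal opportunity, and vice versa.
# All numbers are invented for illustration.
import numpy as np

group   = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # two demographic groups
label   = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # true outcomes
predict = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # model decisions

for g in (0, 1):
    mask = group == g
    pos_rate = predict[mask].mean()             # demographic parity view
    tpr = predict[mask & (label == 1)].mean()   # equal opportunity view
    print(f"group {g}: positive rate {pos_rate:.2f}, true-positive rate {tpr:.2f}")
```

Here both groups receive positive decisions at the same rate (0.50), yet deserving members of group 0 are correctly approved only half as often as those of group 1, so the system is fair by one definition and biased by another.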

Furthermore, societal norms are constantly changing. If social attitudes and understandings shift, an algorithm that exemplifies fairness today may become a relic of bias tomorrow. This dynamic terrain transforms the quest for fairness into a journey rather than a destination, a continuous evolution rather than a single achievement.

Social and technological paradigm shift

Addressing inherent biases necessitates a paradigm shift in our approach to algorithmic fairness. It represents a move from a static, one-time solution to a dynamic, continuous process. It entails ongoing monitoring and updating of algorithms to reflect changing societal standards and perceptions of fairness.
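What such continuous monitoring might look like in practice is sketched below: a fairness metric is recomputed on each new batch of decisions, and a retraining review is flagged when the metric drifts beyond a chosen tolerance. The metric, threshold and random data are all illustrative assumptions.

```python
# Sketch of continuous fairness monitoring: recompute a gap metric on each
# new batch of decisions and flag drift beyond a chosen tolerance.
# The threshold and data are illustrative assumptions.
import numpy as np

TOLERANCE = 0.10  # maximum acceptable gap in positive-decision rates

def parity_gap(group: np.ndarray, predict: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(predict[group == 0].mean() - predict[group == 1].mean())

# Each month's decisions would be audited as they arrive (random stand-ins here).
for month in range(1, 4):
    rng = np.random.default_rng(month)
    group = rng.integers(0, 2, size=200)
    predict = rng.integers(0, 2, size=200)
    gap = parity_gap(group, predict)
    status = "flag for review" if gap > TOLERANCE else "ok"
    print(f"month {month}: parity gap {gap:.3f} -> {status}")
```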

It also calls for a deeper interaction with stakeholders, particularly those from underprivileged communities. Their perspectives and experiences are crucial in comprehending the varied nature of fairness and bias, elevating algorithmic development from a technical exercise to a societal conversation.

Finally, it advocates for solid ethical frameworks and governance mechanisms that guide the development and deployment of algorithms in line with social values and standards. These frameworks are more than just guidelines; they serve as guardrails, ensuring that our pursuit of technical innovation does not outstrip our commitment to equity and justice.

As we stand at the crossroads of technology and society, algorithmic bias and discrimination present both a challenge and an opportunity: a challenge to the integrity of our technological accomplishments and a chance to reflect, correct, and progress.

By tackling avoidable discrimination through care and openness and navigating unavoidable biases through continual growth and inclusive discourse, we can harness the potential of algorithms to reflect humanity’s best qualities, not its flaws.

The route is complicated, but the final result — a society where technology serves as a bridge rather than a barrier to equity — is unquestionably worthwhile.

This article was first published by Daily Maverick. Read the original article on the Daily Maverick website.

Suggested citation: Marwala, Tshilidzi. "The Dual Faces of Algorithmic Bias — Avoidable and Unavoidable Discrimination," United Nations University, UNU Centre, 2024-01-31, https://unu.edu/article/dual-faces-algorithmic-bias-avoidable-and-unavoidable-discrimination.