News

Tackling Social Bias against the Poor: A Dataset and Taxonomy on Aporophobia

New Study Explores How Social Media Reflects Bias Against People Living in Poverty

A recent study led by Dr. Georgina Curto, Senior Researcher and Team Lead at the United Nations University Institute in Macau, in collaboration with researchers from the National Research Council Canada and the University of Ottawa, investigates how aporophobia, bias against people living in poverty, is expressed and discussed across social media platforms.

Published as part of the Findings of the Association for Computational Linguistics: NAACL 2025, the paper "Tackling Social Bias against the Poor: A Dataset and Taxonomy on Aporophobia" introduces a structured approach to identify, analyze, and measure this form of bias through language, leveraging natural language processing (NLP) techniques and interdisciplinary insights from social science, ethics, and AI.

Key Contributions of the Study:

1. The first taxonomy on aporophobia: a framework to define and classify societal bias against people living in poverty.

2. Annotation guidelines: instructions to identify aporophobia in social media texts.

3. DRAX Dataset: an open dataset of 1,800+ English tweets, manually categorized as aporophobia, non-aporophobia, or reports of aporophobia.

4. Innovative data collection: a novel, topic-modeling-based approach that enables the categorization of raw data with attention to context and subtle details.

5. Automated detection evaluation: an assessment of using large language models to automatically detect aporophobia.
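The paper itself describes the evaluation setup in full; as a minimal illustrative sketch only, the snippet below shows what zero-shot classification over the three DRAX labels could look like. The label set comes from the dataset description above, but the prompt wording, the `build_prompt` and `classify` helpers, and the `call_llm` callable are assumptions for illustration, not the study's actual method.

```python
# Illustrative sketch: zero-shot labeling of a tweet with the three
# DRAX categories. The prompt text and the `call_llm` stub are
# hypothetical; the real study's models and prompts may differ.

LABELS = ["aporophobia", "non-aporophobia", "reports of aporophobia"]

def build_prompt(tweet: str) -> str:
    """Assemble a zero-shot classification prompt for one tweet."""
    options = ", ".join(LABELS)
    return (
        f"Classify the following tweet as one of: {options}.\n"
        f"Tweet: {tweet}\n"
        "Label:"
    )

def classify(tweet: str, call_llm) -> str:
    """Query an LLM callable and normalize its answer to a known label."""
    answer = call_llm(build_prompt(tweet)).strip().lower()
    # Fall back to the neutral label if the model answers off-list.
    return answer if answer in LABELS else "non-aporophobia"
```

In practice, `call_llm` would wrap whichever model API is being evaluated; the fallback branch guards against free-form answers outside the label set.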

"This work offers a foundation to better understand how poverty-related bias is communicated online and how it intersects with other forms of discrimination," said Dr. Curto. "It aims to support researchers, civil society, and policymakers in building more equitable digital environments."

Bias against people living in poverty affects more than just perception: it can influence access to services, perpetuate structural inequality, and undermine efforts toward achieving the Sustainable Development Goals (SDGs), particularly SDG 1: No Poverty and SDG 10: Reduced Inequalities.

This research contributes to a growing field of work that seeks to combine AI for social good with critical social analysis. By better understanding how digital discourse perpetuates social stigma, policymakers, researchers, and civil society can work together to promote more inclusive narratives and responsive public policies.
