A new UNU-CPR policy brief explores generative AI and its impacts on disinformation globally, with a particular focus on Sub-Saharan Africa. Disinformation, false or misleading information aimed at deceiving and causing harm, has surged on social media in the past two decades, fueling political conflicts across the region.
The phenomenon has taken on a new, more aggressive form since the advent of generative AI, specifically the public launch of OpenAI’s ChatGPT in November 2022. Generative AI not only enables the creation of false and dangerous content but allows it to spread faster than other information, even reaching audiences without Internet access. This policy brief outlines these risks, how they manifest in the media and political spheres globally, and the roles of foreign actors and local governments in these dangerous contexts.
The brief puts forward several key recommendations that, when implemented together, have the potential to tackle AI-powered disinformation and reduce its ability to promote conflict in Sub-Saharan Africa:
- Disinformation-related efforts should work within the multilateral system across global, regional, and national initiatives that aim to govern AI and digital spaces.
- Social media platforms should prioritize efforts to address disinformation.
- Government organizations, civil society, and the private sector should commit to increasing the funds available for fact-checking initiatives led by journalists and social media platforms.
- National governments and other bodies in Sub-Saharan Africa should develop digital literacy programmes to help people better identify disinformation.