
On the Unsustainability of ChatGPT: Impact of Large Language Models on the Sustainable Development Goals

A blog post on the talk by UNU Macau's Head of Research, Dr. Serge Stinckwich, during the first session of the new UNU Generative AI webinar series.

Written by Xia Fan, with the contribution of Dr. Serge Stinckwich

The United Nations University (UNU) has launched the Generative AI Series of webinars, focusing on generative Artificial Intelligence, which has been gaining a lot of attention. Positive voices consider it the leading force of the current wave of technological revolution, one that will carry human society into a new era of productivity unlike anything seen before. Others, meanwhile, take a more critical perspective on these tools.

UNU Macau, one of the 13 UNU institutes, focuses its research on digital technology and the Sustainable Development Goals (SDGs). We are therefore hosting this webinar series, which will be held monthly from September 2023 until the beginning of 2024.

The first session of the UNU Generative AI webinar series was presented by Dr. Serge Stinckwich, Head of Research at UNU Macau. Dr. Stinckwich is a computer scientist with expertise in the modelling of complex systems, social simulation, and the impact of Artificial Intelligence on the SDGs. His presentation was titled “On the Unsustainability of ChatGPT: Impact of Large Language Models on the Sustainable Development Goals”.

ChatGPT is for sure a wonderful tool, but also probably an unsustainable tool, for multiple reasons.
Dr. Serge Stinckwich

GPT stands for Generative Pretrained Transformer. ChatGPT is a conversational AI chatbot: at its core, software that mimics human conversation using semi-supervised learning and reinforcement learning. It made the news explosively as a new generation of conversational AI chatbot when OpenAI introduced it in November 2022.

Looking back into the history of chatbots, Eliza, introduced in the 1960s by Joseph Weizenbaum of MIT, was the starting point. Weizenbaum wanted a method to explore communication between humans and machines. Eliza mimicked a psychotherapist of the Rogerian school, in which the therapist often reflects the patient's own words back, but it relied only on predefined responses to its users.

Language models are Eliza's modern descendants, interacting with humans through prompts. From the start, however, a language model has been about probable words, not facts. Chatbots, together with content creation, language translation, text summarization, and code generation, are the main applications of language models.

The concept of a prompt is essential to interacting with the model. A prompt is a question or piece of text that you give the system. The system then tries to predict the words that follow the words in the prompt. The language model gains this predictive ability by training on a large corpus of text from the Internet.

For example, if we enter “students open” as our prompt, what would the following words be? The students could open their books, their laptops, and many other things, each with some probability. Language models use these probabilities to generate the next words and, from there, full texts.
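To make this concrete, here is a minimal sketch of next-word prediction using the openly available GPT-2 model, a much smaller predecessor of the model behind ChatGPT, via the Hugging Face transformers library. The prompt and the top-five cut-off are illustrative choices, not part of the talk.

```python
# A minimal sketch: ask GPT-2 for the most probable next words after a
# prompt. GPT-2 stands in for ChatGPT, whose weights are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The students open their", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)       # scores -> probabilities

# Print the five most probable continuations
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>10}  p = {p:.3f}")
```

The model does not “know” what students actually do; it only ranks continuations by how often similar word sequences appeared in its training corpus.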

The second important concept for decoding ChatGPT is “Large”. A Large Language Model is a transformer-based neural network; the transformer architecture was introduced by Google in 2017 in the famous paper “Attention Is All You Need”. Systems performing this task tend to learn from an extensive corpus. The sophistication and performance of a model can be judged by how many parameters it has: the internal values the model learns during training and uses when generating output.

ChatGPT’s training corpus is around 45 TB of text, and its number of parameters is 175 billion. The huge size of this system matters, as it has many consequences for sustainability.
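For a sense of scale, the sketch below counts the parameters of the openly available GPT-2 model; the comparison with GPT-3 uses the figure quoted above.

```python
# Counting a model's parameters, using GPT-2 as an open stand-in
# (the weights behind ChatGPT are not public).
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
n_params = sum(p.numel() for p in model.parameters())
print(f"GPT-2 parameters: {n_params:,}")  # about 124 million

# GPT-3, the model behind the original ChatGPT, has about
# 175 billion parameters, roughly 1,400 times more.
```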

The mission of UNU is to contribute, through collaborative research and education, to efforts to resolve the pressing global problems of human survival, development, and welfare that are the concern of the United Nations, its Peoples, and Member States. AI has grown into a pressing global topic, with practitioners and policymakers calling for a comprehensive assessment of the impact of generative AI on society.

As a bridge across different fields, groups, and geographies, and with the UN 2030 Agenda approaching, UNU Macau aims to lead the discussion and considers a focus on the risks crucial to understanding this technology's implications for the Sustainable Development Goals. Disinformation, environmental impacts, and lack of transparency are the major ones.

Disinformation is the first risk we would like to bring up. ChatGPT's ability to generate plausible-sounding misinformation, disinformation, and hate speech is seen as having potentially serious effects on the well-being of communities and on democracy.

The word “hallucination” is used to describe the false content produced by language models; it was first used in this sense in the research paper Challenges in Data-to-Document Generation (by Sam Wiseman, Stuart M. Shieber and Alexander M. Rush). There have been over 40 incidents related to ChatGPT reported to the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository. Industry practitioners vow to solve this within two years, while opponents such as Emily Bender (Director of the University of Washington's Computational Linguistics Laboratory) argue that it is not fixable, for the fundamental reason that the technology we have now is built on autocompletion, not factuality.

Secondly, the environmental impacts of such large language models have been broadly overlooked. Large Language Models can have a huge impact on greenhouse gas emissions, as they are very big systems running on high-performance hardware such as GPUs and large cluster computing infrastructure. The first paper identifying the societal and environmental risks associated with LLMs was On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (Bender, Gebru, McMillan-Major, and Shmitchell, 2021).

The problem is that these emissions are very difficult to assess, due to a lack of transparency from the tech companies that own the LLMs. The information we do have reveals part of the risk:

 

BERT (Bidirectional Encoder Representations from Transformers)

• Google, 2019

• Parameters: 300 million

• Training on a GPU produces emissions roughly equivalent to a trans-American flight

BLOOM

• Hugging Face, 2022

• Training: 25 tons of carbon dioxide emissions (about 30 flights between London and New York)

• Lower than comparable LLMs, because it was trained in France on largely nuclear-powered electricity

GPT-3

• OpenAI, 2020

• Training: 500 tons of carbon dioxide emissions (about 600 London-New York flights)
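Such figures are typically back-of-the-envelope estimates: hardware power draw, multiplied by training time and by the carbon intensity of the local electricity grid. The sketch below illustrates the arithmetic; every input value is an assumption chosen for illustration, not a disclosed figure for any of the models above.

```python
# Back-of-the-envelope estimate of training emissions.
# Every input below is an illustrative assumption, not a disclosed figure.

N_GPUS = 1_000             # GPUs in the training cluster (assumed)
GPU_POWER_KW = 0.4         # average power draw per GPU, in kW (assumed)
TRAINING_DAYS = 30         # length of the training run (assumed)
PUE = 1.1                  # data-centre overhead factor (assumed)
GRID_KG_CO2_PER_KWH = 0.4  # carbon intensity of the local grid (assumed)

energy_kwh = N_GPUS * GPU_POWER_KW * TRAINING_DAYS * 24 * PUE
emissions_tons = energy_kwh * GRID_KG_CO2_PER_KWH / 1_000

print(f"Energy used: {energy_kwh:,.0f} kWh")
print(f"Emissions:   {emissions_tons:,.0f} tons of CO2")
```

With these made-up inputs, the run would emit roughly 127 tons of CO2. The point of the exercise is that emissions scale directly with cluster size, training time, and grid carbon intensity, which is why BLOOM's largely nuclear-powered training run came in comparatively low.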

 

There is already a lot of ongoing research on reducing the emissions, as well as the economic cost, of such systems. Recently, OpenAI disclosed that it spends around 700,000 USD per day running ChatGPT. It is not even a sustainable model from a business perspective.

Lastly, the transparency of LLMs concerns the international community: we do not know all the details of the ChatGPT algorithm (a black box), where the data come from, or how they were curated.

To better assess this risk, we need to remember that today's AI systems predominantly rely on humans for training. The data used for such training come from texts produced by humans on the Internet, and many documents on the Internet contain toxicity and bias, which must be labeled so that the model can learn to filter them out. TIME journalists recently discovered that, in order to obtain those labels, OpenAI sent tens of thousands of snippets of text to Sama, an outsourcing firm in Kenya, beginning in November 2021.

Employees reading and labeling between 150 and 250 passages of text (between 100 and 1,000 words each) per nine-hour shift have been reported to experience mental health issues. The task's traumatic nature eventually led Sama to cancel all its work for OpenAI eight months earlier than planned.

Given all these complexities, AI, as some researchers have argued, is becoming less of an engineering science and more of an empirical science based on observation and experiment: nowadays, we build complex AI systems first and only later try to understand their consequences and how to use them safely. OpenAI released ChatGPT and gained 100 million users in two months; no prediction could realistically anticipate such rapid uptake of any technology. We must commit to close observation, rigorous analysis, and open discussion of this new reality. Only in a spirit of transparency and collaboration can the international community find new scientific tools and methods in support of a more sustainable development strategy.

 

Suggested citation: Xia Fan and Serge Stinckwich, "On the Unsustainability of ChatGPT: Impact of Large Language Models on the Sustainable Development Goals," UNU Macau (blog), 8 September 2023, https://unu.edu/macau/blog-post/unsustainability-chatgpt-impact-large-language-models-sustainable-development-goals.
