
The Environmental Impact of LLMs

GPT-3 produced carbon emissions roughly 500 times those of a passenger on a New York-San Francisco round-trip flight.


With the adoption of LLMs, a new concern is gaining ground: the environmental impact of training these models. Believe it or not, the carbon footprint left behind by training a large model runs into hundreds of tonnes. As per the sixth edition of Stanford University's AI Index Report 2023, the carbon dioxide-equivalent emissions produced by GPT-3 stood at 502 tonnes in 2022, the highest among the similarly sized models the report compared.

However, the study does not factor in the latest GPT-4 model, which is likely to fare even worse on this count. Notably, OpenAI has not revealed the model's parameter count to the public; even in its latest technical paper, the company disclosed nothing about environmental impact, carbon emissions, or parameter size. Researchers estimate the carbon emissions of AI systems from several factors: the number of parameters used to train the model, the data centre's power usage effectiveness (PUE), and the carbon intensity of the electricity grid.
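These factors can be combined into a rough back-of-the-envelope estimate. Below is a simplified sketch; the numbers are illustrative inputs (a widely cited training-energy estimate for GPT-3 and a typical grid intensity), not the AI Index's actual methodology:

```python
def training_emissions_tonnes(energy_mwh, pue, grid_kgco2_per_kwh):
    """Rough CO2e estimate: IT energy scaled up by facility overhead (PUE),
    then multiplied by the grid's carbon intensity."""
    total_kwh = energy_mwh * 1000 * pue
    return total_kwh * grid_kgco2_per_kwh / 1000  # kg -> tonnes

# Illustrative inputs: ~1,287 MWh is a commonly cited estimate for GPT-3's
# training energy; 0.429 kgCO2/kWh is a representative grid intensity.
print(round(training_emissions_tonnes(1287, 1.1, 0.429), 1))
```

The point of the sketch is that emissions scale with all three factors, which is why the same model trained in a more efficient facility or on a cleaner grid would report very different numbers.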

The AI Index Report compared four LLMs, and GPT-3 had the highest emissions of them all. It was even higher than Gopher, DeepMind's 280B-parameter model. BLOOM, an open multilingual language model with roughly the same parameter count as GPT-3, produced 25 tonnes of carbon in 2022, about 20 times less than GPT-3. Meta's open pre-trained language model OPT fared best, with roughly one-seventh of GPT-3's carbon emissions.

Source: AI Index Report 2023

The power usage effectiveness (PUE) in the above table is a metric for a data centre's energy efficiency. It is the ratio of the total energy consumed by the facility, including cooling and air conditioning, to the energy delivered to the computing equipment. The lower the value, with 1.0 as the ideal, the more efficient the data centre.
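As a concrete illustration of the metric (the figures here are hypothetical):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / energy delivered to IT equipment.
    1.0 is the ideal; higher values mean more overhead (cooling, etc.)."""
    return total_facility_kwh / it_equipment_kwh

# A facility that draws 1,500 kWh to deliver 1,000 kWh to its servers:
print(pue(1500, 1000))  # 1.5 -- a third of the energy goes to overhead
```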

Below is a comparison of these carbon-emission estimates with real-life reference points such as car use, air travel, and a human life over one year. GPT-3 emitted almost 500 times as much as a single passenger on a New York-San Francisco round-trip flight.

Source: AI Index Report 2023

(The Stanford report calls out the challenges in directly comparing the carbon footprints of these models as the accounting methodologies for reporting carbon emissions are not standardised.)
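The flight comparison is straightforward arithmetic, assuming roughly one tonne of CO2e per passenger for the round trip (an approximate figure, in line with commonly used estimates):

```python
gpt3_tonnes = 502      # AI Index Report 2023
flight_tonnes = 0.99   # approx. per-passenger NY-SF round trip (assumed)

print(round(gpt3_tonnes / flight_tonnes))  # ~507, i.e. roughly 500x
```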

AI for Reducing Energy?

Now, AI itself is being tested against the high energy consumption of AI systems. While training LLMs will always expend energy, reinforcement learning is being tried for controlling commercial cooling systems: DeepMind's BCOOLER (BVE-based Constrained Optimisation Learner with Ensemble Regularization) is a reinforcement-learning model aimed at energy optimisation in data centres.

DeepMind and Google ran live experiments at two real-world facilities to reduce energy use. The experiments showed energy savings of 9% and 13% at the two sites.
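BCOOLER's objective, saving energy while respecting safety constraints such as temperature limits, can be caricatured as a penalised reward. This is a toy sketch, not DeepMind's actual formulation; every name and number below is hypothetical:

```python
def cooling_reward(energy_kw, temp_c, temp_limit_c=25.0, penalty=100.0):
    """Toy constrained-RL reward: prefer low energy use, but pay a steep
    penalty whenever the temperature constraint is violated."""
    reward = -energy_kw
    if temp_c > temp_limit_c:
        reward -= penalty * (temp_c - temp_limit_c)
    return reward

# Saving energy is only rewarded while the room stays within limits:
print(cooling_reward(40.0, 24.0))  # -40.0 (safe and efficient)
print(cooling_reward(30.0, 27.0))  # -230.0 (cheaper, but unsafe)
```

The large penalty term is what keeps a learned controller from "optimising" energy by simply switching the cooling off.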

Energy savings over time in the BCOOLER experiment.

Source: arxiv.org

Training on Fewer GPUs

Efforts are also underway to trim the sizeable carbon footprint left behind by LLMs by reducing the compute that powers them. Recently, AI researchers released FlexGen, a high-throughput generation engine for running large language models with limited resources, such as a single commodity GPU. FlexGen uses a linear programming optimiser to search for the most efficient pattern for storing and accessing tensors. By compressing the weights and enabling larger batch sizes, FlexGen is able to increase throughput, and it achieved high throughput while running OPT-175B on a single 16GB GPU.
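The idea of searching for a tensor-placement policy can be illustrated with a brute-force toy. FlexGen's real optimiser solves a linear program over batch sizes and GPU/CPU/disk placements; this hypothetical sketch, with made-up access costs, only conveys the flavour:

```python
from itertools import product

# Hypothetical per-GB access costs for each storage tier (not real numbers).
COST = {"gpu": 1.0, "cpu": 5.0, "disk": 50.0}
GPU_CAPACITY_GB = 16

def best_placement(tensors_gb):
    """Exhaustively try every tier assignment for each tensor group and
    keep the cheapest one that fits within GPU memory."""
    best, best_cost = None, float("inf")
    for tiers in product(COST, repeat=len(tensors_gb)):
        gpu_used = sum(s for s, t in zip(tensors_gb, tiers) if t == "gpu")
        if gpu_used > GPU_CAPACITY_GB:
            continue  # placement does not fit on the GPU
        cost = sum(s * COST[t] for s, t in zip(tensors_gb, tiers))
        if cost < best_cost:
            best, best_cost = tiers, cost
    return best, best_cost

# Three tensor groups of 10, 10 and 8 GB on a 16 GB GPU:
placement, cost = best_placement([10, 10, 8])
print(placement, cost)
```

The exhaustive search keeps one 10 GB group on the GPU and spills the rest to CPU memory; a linear program reaches the same kind of answer without enumerating every combination.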

DistilBERT, a ‘distilled’ version of BERT, allows training a question-answering system or similar model on a single GPU. It is a lighter, faster, and cheaper version of BERT: it retains over 95% of BERT’s performance with 40% fewer parameters and runs 60% faster.
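DistilBERT is produced by knowledge distillation: a small ‘student’ model is trained to match the full ‘teacher’ model’s softened output distribution. A minimal, framework-free sketch of that distillation loss (the temperature and logits below are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperatures flatten the
    distribution, exposing the teacher's low-probability preferences."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's and the student's softened
    output distributions."""
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))

# The loss shrinks as the student's logits approach the teacher's:
teacher = [3.0, 1.0, 0.2]
far = distillation_loss([0.0, 0.0, 0.0], teacher)
near = distillation_loss([2.9, 1.1, 0.3], teacher)
print(far > near)  # True
```

Training on these soft targets, rather than only on hard labels, is what lets the much smaller student keep most of the teacher's accuracy.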

Advances in smaller models could also mean lower emissions, owing to the reduced number of parameters used in training. Meta AI released LLaMA, a family of foundation models ranging from 7B to 65B parameters; LLaMA-13B is reported to outperform GPT-3 despite being more than ten times smaller.


Vandana Nair

With a rare blend of engineering, MBA, and journalism degrees, Vandana Nair brings a unique combination of technical know-how, business acumen, and storytelling skills to the table. Her insatiable curiosity for all things startups, business, and AI technologies ensures a fresh and insightful perspective in her reporting.