A lot has been said about the capabilities of artificial intelligence, from humanoid robots and self-driving cars to speech recognition. However, one aspect of AI that often goes unspoken is its carbon footprint. AI systems consume a lot of power and, as a result, generate large volumes of carbon emissions that harm the environment and further accelerate climate change.
AI Carbon Footprint
It is interesting to note the duality of AI's effect on the environment. On the one hand, it helps devise solutions that can mitigate climate and ecological change, including smart grid design, the development of low-emission infrastructure, and climate change prediction.
But, on the other hand, AI has a significant carbon footprint that is hard to ignore.
For instance, in a 2019 study, a research team from the University of Massachusetts analysed several natural language processing training models, converting the energy they consumed into carbon emissions and electricity cost. The team found that training an AI language-processing system generates an astounding 1,400 pounds (635 kg) of carbon emissions. The study further noted that this figure can reach up to 78,000 pounds (over 35,000 kg) depending on the scale of the AI experiment and the source of power used, the equivalent of 125 round-trip flights between New York and Beijing.
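The conversion such studies perform is simple in principle: multiply the energy a training run consumes (in kWh) by the carbon intensity of the electricity grid it runs on. A minimal sketch of that arithmetic, using an illustrative energy figure and emission factor rather than the study's exact numbers:

```python
# Sketch of the energy-to-emissions conversion used in such studies.
# The energy figure and grid emission factor below are illustrative
# assumptions, not the University of Massachusetts study's values.

KG_PER_POUND = 0.453592

def training_emissions_kg(energy_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimate CO2 emissions (kg) from training energy consumption."""
    return energy_kwh * grid_kg_co2_per_kwh

# Example: a hypothetical 1,500 kWh training run on a grid emitting
# 0.42 kg of CO2 per kWh (roughly a mixed fossil/renewable grid).
emissions = training_emissions_kg(1500, 0.42)
print(f"{emissions:.0f} kg CO2 ({emissions / KG_PER_POUND:.0f} lb)")
```

The emission factor is the key variable: the same training run can produce wildly different footprints depending on whether the data centre draws on coal or renewables, which is why the study's estimates span such a wide range.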
Notably, at the centre of the Timnit Gebru-Google controversy is also a study titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” This paper, co-authored by Gebru, questioned whether AI language models have grown too big and whether tech companies are doing enough to mitigate the resulting risks. Apart from shining a light on how such models perpetuate abusive language, hate speech, stereotypes, and other microaggressions towards specific communities, the paper also addressed AI's carbon footprint and how it disproportionately affects marginalised communities, far more than any other group.
The paper pointed out that the resources required to build and sustain such large models benefit only large corporations and wealthy organisations, while the resulting repercussions of climate change are borne by marginalised communities. “It is past time for researchers to prioritise energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” the paper said.
An OpenAI analysis also shows that since 2012, the amount of compute used to train some of the largest models, such as AlphaZero, has been increasing exponentially with a 3.4-month doubling time, far outpacing Moore's law's two-year doubling period.
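The gap between those two growth rates can be made concrete with a few lines of arithmetic: compute that doubles every 3.4 months compounds to a factor of 2^(t/3.4) after t months, versus 2^(t/24) under Moore's law. A quick sketch (the six-year window is chosen purely for illustration):

```python
# Compare compute growth under a 3.4-month doubling time (OpenAI's
# observation for large AI training runs) against Moore's law's
# roughly two-year (24-month) doubling period.

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Total multiplication in compute after `months` of exponential growth."""
    return 2 ** (months / doubling_time_months)

months = 72  # an illustrative six-year window

ai_growth = growth_factor(months, 3.4)     # 3.4-month doubling
moore_growth = growth_factor(months, 24)   # Moore's law doubling

print(f"AI training compute: ~{ai_growth:,.0f}x")
print(f"Moore's law: {moore_growth:.0f}x")
```

Over the same window, Moore's law yields a single-digit multiple while a 3.4-month doubling time compounds into the millions, which is what makes the energy demands of the largest training runs grow so quickly.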
Small Steps In The Right Direction
To address this issue, in September 2019, employees of tech giants such as Amazon, Google, Microsoft, Facebook, and Twitter joined the worldwide march against climate change and demanded that their employers commit to reducing emissions to zero by 2030. This would require the companies to cut contracts with fossil fuel firms as well as stop the exploitation of climate refugees. In a strongly-worded demand that called out ‘Tech’s dirty role in climate change’, the coalition wrote that “the tech industry has a massive carbon footprint, often obscured behind jargon like ‘cloud computing’ or ‘bitcoin mining’, along with depictions of code and automation as abstract and immaterial.”
Amid the growing conversation around climate change, the Allen Institute for Artificial Intelligence also started a movement called Green AI through its research. The paper proposed undertaking AI research that yields the desired results without increasing computational cost, and in some cases, even reducing it. As per the authors, the goal should be to make AI greener and more inclusive, as opposed to the Red AI that currently dominates the research industry. Red AI refers to research practices that use massive computational power to obtain state-of-the-art accuracy, with little regard for efficiency.
In a 2019 paper, Roel Dobbe and Meredith Whittaker of the AI Now Institute gave seven recommendations that could help draft a ‘tech-aware climate policy and climate-aware tech policy’. They included —
- Forcing companies to provide full energy and carbon transparency.
- Accounting for the full-stack supply chain, including all the steps from mining minerals for the chips to waste produced by consumer gadgets.
- Understanding rebound effects and ensuring no increase in fossil fuel consumption.
- Calculating energy and climate impacts of AI as a standard part of the policy practice.
- Integrating tech regulation and green deal policymaking.
- Curbing the use of AI for fossil fuel extraction.
- Identifying how AI harms and excludes climate refugees.
There is a lot to be done in recognising, understanding, and acting against the implications of AI's carbon footprint. Ideally, the bigger tech companies would take the first step in this direction for others to follow.