
Former co-leader of Google’s Ethical AI team, Timnit Gebru, thinks that AI needs to slow down


On May 18, 2021, Google CEO Sundar Pichai announced the rollout of LaMDA, a large language model (LLM) system that can chat with its users on any subject. This is an example of how language technologies are becoming enmeshed in the linguistic infrastructure of the internet, despite the unresolved ethical debates surrounding these cutting-edge systems.

Language technologies are getting out of hand 

In December 2020, Timnit Gebru was fired from Google for refusing to retract a groundbreaking paper in which she argued that these models are prone to producing and propagating racist, sexist, and abusive ideas. Despite being the world’s most powerful autocomplete technologies, LLMs don’t understand what they’re reading or saying, and many of their advanced capabilities are still only available in English.

LLMs are prone to relegating certain professions to men and others to women; to associating negative words with Black people and positive words with white people; and, if probed in a certain way, to encouraging people to self-harm, condoning genocide, or normalizing child abuse.

The danger of these systems lies in the fact that they are conversationally fluent, and it is very easy to believe that their outputs were written by other human beings. This gives them the dangerous potential of producing and promoting misinformation at a massive scale. 

Censorship of research 

Very little research is being done to understand how the flaws of LLMs could affect people in the real world or what efforts should be taken to mitigate those harms. Google’s firing of Gebru and her co-lead, Margaret Mitchell, underscored that the few companies rich enough to train and maintain LLMs have a heavy financial interest that will deter them from carefully examining their ethical implications.

Since 2020, Google’s internal review process has required a separate level of review for “sensitive topics”. If researchers are writing about topics such as facial recognition or the categorization of gender, race, or politics, they must first have Google’s PR and legal teams look over their work and suggest changes before they can publish it.

While many researchers turn to academia as an alternative to this, even that avenue can be riddled with concerns related to gatekeeping, harassment, and an incentive structure that doesn’t support long-term research. There are also concerns about tech companies funding AI research at academic institutions, with some researchers comparing it to how big tobacco companies used to fund research in an effort to dispel concerns about the health effects of smoking. 

AI needs to slow down 

In a recent interview with WIRED magazine, the central point that Timnit Gebru made was that AI needs to slow down. 

Gebru has witnessed the negative consequences of the hurried development of LLMs in her own life. She was born and raised in Ethiopia, where 86 languages are spoken—and nearly none of them are accounted for by mainstream language technologies. 

Despite these linguistic inadequacies, Facebook relies heavily on LLMs to moderate content globally. When the war broke out in the Tigray region of Ethiopia, the platform struggled to get a handle on the outbreak of misinformation. 

In an interview with the Wharton Business Daily, Gebru said that she is most concerned about the ‘move fast and break things’ attitude that dominates tech today. She argues that when software and data are so easy to download and collect, it becomes easy to overlook considerations that should be taken into account. According to her, incentive structures have to change so that people can slow down and be educated about what they should be thinking about when collecting data.

In the same podcast, Gebru said she has previously witnessed how good research can combat the lack of awareness regarding the speed at which technology and innovation are outpacing regulation and policy. The 2018 paper she wrote in conjunction with Joy Buolamwini, which shed light on the disparities in commercial gender classification, played a large role in effecting a rapid and remarkable change in industry and policy.

Clearly, research, and the awareness it brings, does have the potential to influence the direction that AI technology takes.


Srishti Mukherjee
