
Former co-leader of Google’s Ethical AI team, Timnit Gebru, thinks that AI needs to slow down

On May 18, 2021, Google CEO Sundar Pichai announced the rollout of LaMDA, a large language model (LLM) system that can chat with its users on any subject. This is an example of how language technologies are becoming enmeshed in the linguistic infrastructure of the internet, despite the unresolved ethical debates surrounding these cutting-edge systems.

Language technologies are getting out of hand 

In December 2020, Timnit Gebru was fired from Google for refusing to retract a groundbreaking paper, “On the Dangers of Stochastic Parrots”, in which she argued that these models are prone to producing and propagating racist, sexist, and abusive ideas. Although they are the world’s most powerful autocomplete technologies, LLMs don’t understand what they’re reading or saying, and many of their most advanced capabilities are still only available in English.

LLMs are prone to assigning certain professions to men and others to women; they associate negative words with Black people and positive words with white people; and, if probed in a certain way, they can encourage people to self-harm, condone genocide, or normalize child abuse.


The danger of these systems lies in their conversational fluency: it is very easy to believe that their outputs were written by another human being, which gives them the potential to produce and promote misinformation at a massive scale.

Censorship of research 

Very little research is being done to understand how the flaws of LLMs could affect people in the real world, or what should be done to mitigate those harms. Google’s firing of Gebru and her co-lead, Margaret Mitchell, underscored that the few companies rich enough to train and maintain LLMs have a heavy financial interest in them, one that deters careful examination of their ethical implications.





Since 2020, Google’s internal review process has required a separate level of review for “sensitive topics”. If researchers write about topics such as facial recognition or the categorisation of gender, race, or politics, Google’s PR and legal teams must first look over their work and suggest changes before it can be published.

While many researchers turn to academia as an alternative, even that avenue can be riddled with gatekeeping, harassment, and an incentive structure that doesn’t support long-term research. There are also concerns about tech companies funding AI research at academic institutions, with some researchers comparing it to the way big tobacco companies once funded research in an effort to dispel concerns about the health effects of smoking.

AI needs to slow down 

In a recent interview with WIRED magazine, Timnit Gebru’s central point was that AI needs to slow down.

Gebru has witnessed the negative consequences of the hurried development of LLMs in her own life. She was born and raised in Ethiopia, where 86 languages are spoken, nearly none of which are supported by mainstream language technologies.

Despite these linguistic gaps, Facebook relies heavily on LLMs to moderate content globally. When war broke out in Ethiopia’s Tigray region in 2020, the platform struggled to get a handle on the resulting flood of misinformation.

In an interview with Wharton Business Daily, Gebru said that she is most concerned about the ‘move fast and break things’ attitude that dominates tech today. She argues that when software makes it possible to download models and collect data quickly and easily, it becomes just as easy to overlook the considerations that ought to go into that collection. According to her, incentive structures have to change so that people slow down and learn what they should be thinking about when they collect data.

In the same podcast, Gebru noted that she has previously witnessed how good research can counter the lack of awareness about how quickly technology and innovation outpace regulation and policy. The 2018 paper she co-authored with Joy Buolamwini, “Gender Shades”, which shed light on the disparities in commercial gender classification, played a large role in effecting rapid and remarkable change in industry and policy.

Clearly, research, and the awareness it brings, has the potential to influence the direction that AI technology takes.
