Timnit Gebru’s Layoff Is Yet Another Case Of Tech Companies’ Failed Attempt To Build Ethical AI

In recent news, Timnit Gebru, a prominent voice in the field of AI ethics and co-lead of Google’s Ethical AI team, was fired from the company yesterday.

Gebru said she was asked to retract or remove her name from a paper she had co-authored after an internal review found its content ‘objectionable’. The paper discussed ethical issues with recent advances in AI systems that work with language, a technology Google says is important to the future of its business.

Gebru’s work in ethical AI has produced significant findings, such as uncovering bias in facial recognition systems, which were shown to misidentify darker-skinned women at far higher rates than lighter-skinned men. Gebru has also been outspoken about the lack of diversity in the tech industry.


This article questions whether tech companies are actually serious about creating ethical AI and, in that regard, examines the importance of protecting employees and employee diversity in tech firms.

An environment to protect employees who want to raise ethical concerns

Gebru’s case is not a one-off. Over the years, many whistleblowers in the industry have been fired or forced to resign over ethical concerns they raised about the use of data or AI applications.


In a similar instance, Jack Poulson, a research scientist at Google, resigned in 2018 over the company’s (now-scrapped) plan to build a censored search engine for the Chinese market. Likewise, Thomas le Bonniec, a whistleblower who quit Apple over ethical concerns, went public this year with allegations that the firm collected massive amounts of audio data, in effect wiretapping entire populations in Europe. A Facebook employee was also fired for repeatedly questioning the company’s failure to deal effectively with political misinformation.

A report published by Doteveryone, a think-tank ‘fighting for better tech’, revealed that around 28% of tech workers in the UK had witnessed decisions that could have a negative impact on society, and around 18% of them left their organisations as a result.

If tech companies were really serious about ethical AI, whistleblowing would lead to better outcomes than it did for Gebru and others, who were fired or had to resign.

Companies that actually want to build ethical AI should have policies that protect employees who raise ethical concerns about their algorithms, and should create an environment where such issues can be raised safely and anonymously.

An environment to protect employee diversity

While some of the big tech companies have taken initiatives to ensure diversity in AI projects, overall representation of minority communities has not improved. For instance, since tech companies started publishing diversity data in 2014, the share of Black or Latino technical employees in the US at Google and Microsoft has risen by less than a percentage point in five years.

People are aware of this issue, and the topic of discrimination against minority communities in tech companies has been raised time and again. When Gebru was fired, several people took to Twitter to write about discrimination against minorities in tech companies using the hashtag #ISupportTimnit.

Overall, it is necessary to ensure diversity in the teams developing AI algorithms. This is not only important from a social point of view; diverse teams have also proven beneficial in reducing biases in algorithms. It thus becomes crucial for tech companies to protect employees from diverse backgrounds if they want to build ethical AI.

Wrapping Up

The development of ethical and fair AI applications can be ensured in several ways; companies could use third-party audits or the new tools now emerging. However, a company’s own employees are in the best position to judge the fairness of its algorithms, since they build them directly.

The question, then, is whether tech companies are ready to prioritise ethics by protecting their employees.


Kashyap Raibagi
Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com
