Google removed “don’t be evil” from its code of conduct in the first half of 2018. In retrospect, the move anticipated the sequence of events culminating in the firing of Google’s leading ethical AI researchers, Timnit Gebru and Margaret Mitchell.
It all started with Gebru and Mitchell’s paper on the dangers of large language models like GPT-3 and BERT. The paper spotlighted these models’ perils, including environmental and financial costs, opportunity costs, and the risk of encoding biases that can lead to racism, stereotyping and even wrongful arrests. Gebru posed the critical question, “how big is too big?”, much to Google’s displeasure.
The conversation around ethics is gaining traction as AI and ML usher in the fourth industrial revolution. Google’s indiscriminate firing of Gebru and Mitchell drew flak from across the industry. In related news, FAccT, a conference on AI fairness and transparency, recently ended its sponsorship relationship with Google.
Why Should Google Be Concerned?
Google has been investing heavily in machine learning and sponsoring some of the key ML conferences worldwide. It is in the best interest of tech companies like Google to be on good terms with researchers, industry experts, and academics to push R&D and innovation in new tech. The recent controversies have forced researchers and stakeholders to rethink their partnerships with the tech giant, introducing friction into the company’s progress on machine learning research.
Case in point: a research fellow at the Center for Applied Data Ethics told the media the controversy “undermines all of the company’s research.” He believes Google has its work cut out to win back credibility with researchers, both inside and outside the company.
Another expert, from New York University’s AI Now Institute, said that while “academic norms” are key to machine learning research, Google has always prioritised its “bottom line” over “knowledge production.”
Scott Niekum, the director of a robotics lab at the University of Texas at Austin, boycotted Google’s recent workshop, saying the tech giant needs to seriously reconsider its stance on ethics; otherwise, more academics, researchers and experts will sever their partnerships with Google.
Hadas Kress-Gazit, a robotics professor at Cornell University, has been vocal about her disapproval of Google’s policies. In a recent tweet, she announced her withdrawal from the Google ML and Robot Safety Workshop.
This isn’t the first time Google has come under scrutiny over its AI tech and ethics. In 2015, the company drew criticism for racist labelling in its Photos app, and the offending labels were later blocked. In 2018, Google employees protested against the Pentagon’s use of the company’s technology to analyse drone images for targeting ISIS. Soon after, Google released a set of ethical principles on AI use. However, the recent conflicts have exposed the faultlines in Google’s commitment to responsible AI.
Besides researchers and academics, the US Congress has asked Google to explain the firing of Gebru. This, in turn, has forced Google and other tech companies to reconsider algorithmic accountability and ways to mitigate bias. Earlier this year, Google, Facebook and OpenAI were issued warnings to set standards on ethical AI before launching products.
Inarguably, Google’s reputation has taken a hit in the wake of the recent controversies, with bias in machine learning technology at the eye of the storm. The sacking of Google’s ethical AI leads has pushed the entire industry to put forward its views on equitable regulation of artificial intelligence. On the other hand, Marian Croak’s appointment as the new AI lead and Sundar Pichai’s memo to employees after Gebru’s firing signal a change of tack in Google’s approach to promoting equity and diversity within the company.