
Former Google AI Ethicist Calls Bard and Its Ilk ‘Bulls***’ Generators

Adopting facial recognition systems, predictive policing, ShotSpotter or other data-harmonising technologies is only going to trap the same people over and over again.

“There are a lot of people freaking out about the way large language models (LLMs) are doing things like writing college essays, etc. The harm is, these things are just bullshit generators,” said Alex Hanna, former AI ethicist at Google, in an exclusive interview with Analytics India Magazine.

The statement comes in the backdrop of Google’s recent release of its experimental AI chatbot ‘Bard’, built on the large language model LaMDA. The launch quickly turned into a mishap when one of the facts cited in Bard’s promotional ad turned out to be flat-out wrong. Many Google employees now believe the release was ‘rushed’ in the wake of the popularity of ChatGPT, developed by OpenAI and backed by Microsoft.

Hanna questioned the need for such models, given that training them is extremely expensive and carries a heavy carbon footprint. Most importantly, she asked: how are these models going to serve the most marginalised people right now?

Powered by Greed 

“Big tech is currently too focused on language models because the release of this technology has proven impressive to the funder class—the VCs—and there’s a lot of money in it,” said Hanna.


She believes there are far more worthwhile uses of AI, such as supporting people in welfare allocation or providing other useful services, rather than monetising the technology as aggressively as possible. Hanna worries that tools like ChatGPT and Bard could instead be used to discriminate economically, socially and politically.

Abuse of data 

Further, Hanna noted that there is currently an explosion of data-labelling companies. She pointed, for example, to the TIME report that revealed how OpenAI outsourced data-labelling work for its flagship product ‘ChatGPT’ to workers in Kenya.

She said that the data used to train these models (GPT-3.5 or LaMDA) is either proprietary or simply scraped from the internet. “Not a lot of attention is paid to the rights of the people in those data—also referred to as Data Subjects in the EU’s Artificial Intelligence Act—and also the people who have created those data, including artists, writers, etc.,” said Hanna, explaining that these people are not being compensated and that most companies treat the issue as an afterthought.

Lately, several artists have sued such organisations or sought remuneration for their work. In a recent case, Sarah Andersen, Kelly McKernan and Karla Ortiz dragged Midjourney, DeviantArt and Stability AI to court for using their work as training data without explicit consent.

Abuse of power 

Today, corporations claiming to be ‘AI first’ are built on the backs of underpaid workers, such as data labourers, content moderators, warehouse workers and delivery drivers. Hanna’s team works to assess and mitigate the harms around AI while also imagining, with community input, what different kinds of AI could be.

For instance, Amazon puts its delivery partners and workers under surveillance in the interest of getting goods delivered quickly to its customers. Internal documents have revealed that the company constantly monitors its employees’ movements and activities, a practice that is dressed up as AI and technological advancement but amounts to exploitation of workers, Hanna emphasised. “We are defending against the harms of technology, but we really want to expand beyond,” she added.

The ‘DAIR’ need of the hour 

While the public is distracted by the spectre of machines and the noise these large language model chatbots are creating, an army of researchers is holding discussions on ethical AI. This is where people like Hanna come into the picture. Her journey began back in 2017, when she first got involved with AI ethics.

“I started focusing on the use of technologies because I’ve always had this interest in how society interacts with computing,” said Hanna. “I became disenchanted with how this stuff wasn’t really being used to serve people. It could also be used for surveillance on a massive scale.”

When she was a Senior Research Scientist in Google’s Ethical AI team, Hanna predominantly questioned the tech industry’s approach to artificial intelligence. However, over time, she became disillusioned with the company’s culture, which she deemed both racist and sexist. Hanna’s unhappiness was amplified in late 2020 when Google fired Timnit Gebru, the co-lead of the Ethical AI team.

While the episode brought a new level of attention to her work, Hanna has made the most of the jarring moment. She is now attempting to make a change from the outside as the director of research at the Distributed AI Research Institute, or DAIR. The institute approaches AI research from the perspective of the places and people most likely to experience its harms.

(Not) Getting Along 

When asked in what areas AI should be avoided, Hanna scoffed in disappointment and said, “I don’t know how to answer that question because many places have already been taken over by AI that shouldn’t be”. Much work is being done in high-stakes areas like child welfare, public safety and policing. Hanna does not believe in going back to the traditional ways, but she asserts that the current technology does not provide real solutions either.

“We need to invert what these technologies are reinforcing,” she said, elaborating on policing in the US as a racist endeavour. Adopting facial recognition systems, predictive policing, ShotSpotter or other data-harmonising technologies is only going to trap the same people over and over again. It is going to provide even less context to the people making these decisions. So, policing needs to be completely reimagined to serve people, Hanna concluded.



Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
