
Linguist Emily M. Bender Has a Word or Two About AI

“The handful of very wealthy (even by American standards) tech bros are not in a position to understand the needs of humanity at large,” she argues.


“Who are they to be speaking for all of humanity?” asked Emily M. Bender, directing the question at tech companies in a conversation with AIM. “The handful of very wealthy (even by American standards) tech bros are not in a position to understand the needs of humanity at large,” she bluntly argued.

The vocal, straightforward, and candid computational linguist is not exaggerating when she calls out the likes of OpenAI. Sam Altman is pitching AI tools like ChatGPT as solutions to humanity’s problems, including poverty, hunger, and climate catastrophe, even as ChatGPT was developed with Kenyan sweatshop labour, has been sued for violating privacy laws, and continues to pollute the internet as a source of misinformation.

“I would love to see OpenAI take accountability for everything that ChatGPT says because they’re the ones putting it out there,” she said without hesitation, even though it has long been debated who should bear the blame when technologies backfire: developers or users.

She sternly added that they are the ones who set up the means to spill synthetic information into the ecosystem. “So far, there’s no accountability for that, and there should be,” demanded the 50-year-old professor, who resists being called an AI researcher. Why? “Because I’m not interested in the project of building AI. But I collaborate with many people who fit under the umbrella of AI ethics,” the industry critic clarified.

Bender has long been busting the overpromises of AI, but she believes in the value of language technology. (Even though she has never used ChatGPT!)

“There’s a lot of speakers and people using [language technology]. When designing technology, we need to think about who we are designing it for,” she stated. If someone has built a text or speech system for American English, she argued, they should not try to sell it in the Indian market; that doesn’t make sense. She thinks machine translation can be handy when there is transparency about what it is and is not good at.

Bender pointed out that one of the issues with machine translation is that if the output sounds fluent, the user is more likely to think that it’s also accurate. “Those two things don’t necessarily correlate,” she highlighted. 

Language technology can also be beneficial in search engines. We still need search engines that understand queries in natural language. “I would like to find bread recipes without salt and have it understand what the ‘without’ means and respect that in the query, because that’s not how search engines are built,” frowned Bender, who has written about the subject.

There is a lot of work to be done here, and a lot of value in it. Yet all the energy and focus is on large language models, which were labelled ‘stochastic parrots’ in a paper Bender co-authored, because they spit out plausible-sounding text with no basis in any communicative intent.

Regulating Actions

“You can have all the good intentions in the world, but you’re not going to get very far until there’s some regulation that protects the rights that the profit motive runs roughshod over,” Bender dropped another truth bomb. 

“These are rights of people as individuals and also as communities,” she continued, suggesting better regulation and dialogue around ethics.

The companies selling AI have a meaningful seat at the table where laws are made. Hence, Bender has been talking to policymakers from the United Nations and the International Monetary Fund. Over time, she has discovered that regulators are also hearing from the technologists, whose lobbyists are paid to “make sure the regulation doesn’t get in the way.”

In the US context, she often hears regulators say that they must first make sure not to hinder innovation, and only then consider people’s rights. Much of Bender’s work is reminding them that they are there to protect the rights of the people, not the rights of the corporations.

People who question the technology don’t want to inhibit innovation, she clarified. “We want to push all of this energy and capital spent on AI models in the directions that benefit people and not trample their rights,” she explained. Bender has worked with superstar ethicist Timnit Gebru, whom Google fired for pointing out the problems of its in-house AI models.

“Regulation can channel innovation in that way,” suggested Bender, explaining that regulators’ penalties must be big enough to dissuade companies from malpractice.

When she was a kid, one of the big environmental issues was the hole in the ozone layer. “It’s not a problem anymore because we created regulations,” she explained, adding that innovation then probably came up with other solutions.

The second hurdle is that it is difficult to have conversations about ethics because people ask, “Why are you telling me what to do?” One way to circumvent that kind of objection, Bender suggested, is to acknowledge people’s good intentions and discuss the possible impacts of the technology.

Reminiscing About the AI-Free Google

Bender recalled that Cory Doctorow coined the term ‘enshittification’, where “you have a decent platform, and then its owners make it worse for everybody else involved. We’re (definitely) seeing that with Google search results.”

“In the original Google paper about PageRank, they said that if you add an advertising incentive to this, it will not work. Then it became an advertising company,” Bender sighed.

“Back when it started, it was an effective way to find information on the web. Then, between their advertising incentives and search engine optimization, it got harder and harder to use. The condition is worsening further, with Google and others putting out synthetic stuff all over the place. It’s getting even harder to find good information,” Bender said. 

She is inspired by the work of Safiya Noble, the author of Algorithms of Oppression, who asks us to think about information access as a public good rather than something left to private interests.

“I don’t think we have to believe the tech companies that just because this is the path we’ve been on, it’s the only path,” Bender rightly concluded. 


Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.