Big Tech Companies Are Not Even Trying to Build Better AI

The latest State of AI report shows how few researchers are working on building AI responsibly

When dealing with high-tech giants, the notion of a voluntary commitment to building AI responsibly is a joke. Google and Amazon’s union-busting attempts, Meta’s Cambridge Analytica scandal, and Microsoft and OpenAI’s alleged copyright violations are only the tip of the iceberg, showing that voluntary commitments cannot be counted on to deliver better outcomes.

The issues with AI models keep several researchers up at night. Swedish philosopher Nick Bostrom is among those asking, “How do we ensure that highly cognitively capable systems — and eventually superintelligent AIs — do what their designers intend for them to do?” Bostrom delves deeper into this unsolved technical problem in his book ‘Superintelligence’ to draw more attention to the subject.

An infamous example of misalignment: a Google algorithm trained on a dataset of millions of labelled images could sort photos into categories as fine-grained as “Graduation”, yet classified people of colour as “gorillas”. So why do these issues persist despite companies’ repeated “efforts” to build better algorithms and models?

Irene Solaiman, policy director at Hugging Face, believes that part of the trouble with alignment is the lack of consensus on what constitutes the field of value alignment. “Generally a definition is following developer intent, but so many of the people affected are not reflected in developer teams,” she added.

Solaiman, who was part of the GPT-2 and GPT-3 teams, is a strong proponent of inclusive value alignment that recognises asymmetries in who is affected. For example, she said, “I would consider harmful stereotyping and biases part of alignment work. The number of researchers differs based on how we consider technical and social scientists and how close they are to model development and release.”

The State of Aligned AI

The latest State of AI report also points out how few AI researchers are actively working on keeping these models from becoming misaligned. Cumulatively, fewer than a hundred researchers across seven leading organisations work on the problem, an extremely small fraction of the AI research community worldwide.

As per the report, Google DeepMind has the largest and most established AI alignment team, with 40 members led by co-founder Shane Legg. In comparison, OpenAI has a team of 11, and its rival startup Anthropic has 10.

OpenAI recently formed a team called ‘Preparedness’ to assess, evaluate and probe AI models to protect against what it describes as “catastrophic risks”. A few months ago, it also announced its Superalignment project. The idea boils down to: we observe that advanced AI is dangerous, so we build an advanced AI to fix the problem.

Professor Olle Häggström, one of the signatories of the open letter calling for a six-month pause on building AI more advanced than GPT-4, called it ‘a dangerous leap out in the dark’. “Tomorrow’s AI poses risks today,” he emphasised, citing insights from a Nature article.

In the ongoing AI discussions, a regrettable divide persists between those who focus on immediate AI risks and those who consider longer-term dangers. However, with growing awareness that a maximally dangerous breakthrough might not be decades away, these two factions should join forces, Häggström advises.

“Neither of these groups of AI ethicists is getting what they want,” he said, noting the lack of focus on building aligned AI.

Just a Label

While DeepMind has advocated for safer, aligned models since day one, its counterpart Google has other plans. The tech titan recently pledged $20 million to a responsible AI fund. The search giant made $60 billion in profit in 2022, which means roughly 0.03% of its profit is directed towards building better models.
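
The arithmetic behind that figure, as a rough check (assuming the $20 million pledge is set against the full $60 billion annual profit):

\[
\frac{\$20\ \text{million}}{\$60\ \text{billion}} = \frac{2 \times 10^{7}}{6 \times 10^{10}} \approx 3.3 \times 10^{-4} \approx 0.03\%
\]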

Google has been carrying around the ‘bold and responsible’ label; its executives have not shut up about generative AI since 2023 began. The new mantra has been repeated over and over at conferences held from Mountain View to Bengaluru.

But it’s not the only one. For the majority of big tech companies, being “responsible” has become a tactic to stay in the good books of the media and their communities. Behind the scenes, the shenanigans continue, showing that organisations like Google and OpenAI do not genuinely put ethics above profitability.

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.