
‘Big Tech’s AI Regulation Talk Doesn’t Match Their Actions’

Sören Mindermann is currently a postdoc with computer scientist Yoshua Bengio at MILA, working on AI safety.


“One thing that has been annoying me is the myth that big tech companies are the main voices calling for regulation,” Sören Mindermann recently told AIM in an interview.

Mindermann is currently a postdoc with computer scientist Yoshua Bengio at MILA, working on AI safety. Though he only recently finished his PhD in machine learning at the University of Oxford, he wants to stay focused on AI safety and risk.

“They’re just getting the most attention. But many big tech firms like Meta and IBM are denying risks and lobbying against regulation with a clever lie. They pretend that it’s only other companies who call for regulation, calling it ‘regulatory capture’. There’s actually an emerging academic consensus calling for regulation and acknowledging real risks,” he stated.

The AI researcher wrote his first paper on AI safety seven years ago. “I’ve had detours into scaling, deep learning, and statistical modelling for COVID, but my focus has always been on safety. Suddenly it is becoming such a big deal and a bit sooner than I even expected it to be. I thought we’re gonna need a lot of time to prepare for these problems,” he said.

A month ago, Mindermann published a paper alongside 22 academic co-authors from the US, China, the EU and the UK, including Geoffrey Hinton, Stuart Russell and Bengio. The AI insiders called for immediate action, proposing that companies working on AI systems allocate at least one-third of their resources to ensuring AI safety and ethical use.

“This paper started with us noticing that there are many AI academics, including the most cited people in the field, who are worried about the risks the technology is posing,” said Mindermann. He is currently focused on projects around AI honesty.

“People aren’t always going to be able to tell if what the AI says is true. So, we developed a lie detector for language models that can tell with reasonably high accuracy whether AI output is true or not,” he revealed.
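The interview does not go into how the detector works, so the snippet below is only a generic illustration of the idea behind a truthfulness classifier, not Mindermann’s actual method: the statements, labels and TF-IDF features are hypothetical stand-ins chosen purely so the example runs on its own.

# Illustrative sketch only: a toy "truthfulness probe" in the spirit of a lie detector
# for language-model outputs. This is NOT the researchers' actual method; it shows the
# general recipe of fitting a classifier on statements labelled true or false.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: model outputs annotated as true (1) or false (0).
statements = [
    "Paris is the capital of France.",
    "The Moon is larger than the Earth.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Great Wall of China is visible from the Moon with the naked eye.",
]
labels = [1, 0, 1, 0]

# In practice richer features would be used (for example, the model's own internal
# signals or its answers to follow-up questions); TF-IDF over the text is a stand-in
# that keeps this sketch self-contained and runnable.
probe = make_pipeline(TfidfVectorizer(), LogisticRegression())
probe.fit(statements, labels)

# Score a new model output: probability that the statement is true under this toy probe.
print(probe.predict_proba(["The Earth orbits the Sun."])[0][1])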

Lack of Focus, Knowledge and Researchers

Mindermann knows the safety teams working at Google and OpenAI. “The last time I checked, they were a tiny part of their overall research teams talking to one little safety team,” he pointed out. The 2023 State of AI Report bears this out in numbers.

As per the report, Google DeepMind has the largest and most established AI alignment team of 40 members led by co-founder Shane Legg. In comparison, OpenAI has a team of 11 members, and its rival startup Anthropic has only 10.

But the companies are not solely to blame for the sad state of affairs. “All the companies want to stay ahead of the others, cut corners on safety and make profits from AI while letting society deal with the risks. That’s why governments need to intervene. In addition to the competition, we have a lack of awareness of the risks among AI developers,” noted Mindermann.

He further noted that understanding the risks AI poses is not part of an AI researcher’s job description.

“No one really knows what AI is causing in sensitive domains, and the regulation so far is reactive, coming only after something bad happens. It could turn out similar to Chernobyl, where, after a big accident, the nuclear industry was largely shut down. Some AI companies are calling for regulation partly because they don’t want something like that to happen to the AI industry,” he mentioned.

Not Keeping Up with the Pace

“Regulation is central, but it is too slow considering the rate at which AI is progressing,” the AI researcher suggested. Similar thoughts had echoed through the AI community eight months earlier, when thousands of business and AI leaders signed an open letter calling for a six-month pause on the training of AI systems more powerful than OpenAI’s GPT-4.

While the call was not heeded, it was not considered a failure either, because AI safety finally made it onto the public agenda. Mindermann suggested that we need immediate, detailed commitments from companies before they train the next generation of AI systems.

“If they have a level of dangerous capability that the governments will be able to evaluate, then the companies can commit to safety measures, including not deploying the system or not developing it any further if they haven’t got the safeguards ready,” he concluded.


Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.