“One thing that has been annoying me is the myth that big tech companies are the main voices calling for regulation,” Sören Mindermann recently told AIM in an interview.
Mindermann is currently a postdoc working on AI safety with computer scientist Yoshua Bengio at MILA. Even though he just finished his PhD in machine learning at the University of Oxford, he wants to stay focused on AI safety and risk.
“They’re just getting the most attention. But many big tech firms like Meta and IBM are denying risks and lobbying against regulation with a clever lie. They pretend that it’s only other companies who call for regulation, calling it ‘regulatory capture’. There’s actually an emerging academic consensus calling for regulation and acknowledging real risks,” he stated.
The AI researcher wrote his first paper on AI safety seven years ago. “I’ve had detours into scaling, deep learning, and statistical modelling for COVID, but my focus has always been on safety. Suddenly it is becoming such a big deal and a bit sooner than I even expected it to be. I thought we’re gonna need a lot of time to prepare for these problems,” he said.
A month ago, Mindermann published a paper alongside 22 academic co-authors from the US, China, the EU, and the UK, including Geoffrey Hinton, Stuart Russell and Bengio. The AI insiders called for immediate action, proposing that companies working on AI systems allocate at least one-third of their resources to ensuring AI safety and ethical use.
“This paper started with us noticing that there are many AI academics, including the most cited people in the field, who are worried about the risks the technology is posing,” said Mindermann. He is currently focused on AI honesty projects.
“People aren’t always going to be able to tell if what the AI says is true. So, we developed a lie detector for language models that can tell with reasonably high accuracy whether an AI’s output is true or not,” he revealed.
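The article doesn’t describe how the lie detector works, but one common approach in this research area is to train a simple probe classifier on a model’s internal activations to separate truthful from untruthful outputs. The sketch below is purely illustrative and is not Mindermann’s actual method: the “activations” are synthetic random vectors standing in for real hidden states, and the separating direction is an assumption made for the demo.

```python
# Hypothetical sketch of a truthfulness probe: a linear classifier
# trained on (stand-in) language-model hidden activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
dim = 64  # pretend hidden-state dimension

# Synthetic stand-ins for activations of true vs. false statements:
# the two classes are shifted along one fixed direction (an assumed,
# illustrative structure -- real activations are far messier).
direction = rng.normal(size=dim)
true_acts = rng.normal(size=(200, dim)) + 0.5 * direction
false_acts = rng.normal(size=(200, dim)) - 0.5 * direction

X = np.vstack([true_acts, false_acts])
y = np.array([1] * 200 + [0] * 200)  # 1 = truthful, 0 = false

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

On this toy data the probe separates the classes easily; the hard part in practice is collecting labeled true/false model outputs and checking that the probe generalizes beyond its training distribution.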
Lack of Focus, Knowledge and Researchers
Mindermann knows the safety teams working at Google and OpenAI. “The last time I checked, they were a tiny part of the overall research organisations: each company had just one little safety team,” he pointed out. The 2023 State of AI report put numbers to the same observation.
As per the report, Google DeepMind has the largest and most established AI alignment team of 40 members led by co-founder Shane Legg. In comparison, OpenAI has a team of 11 members, and its rival startup Anthropic has only 10.
But the companies are not solely to blame for the sad state of affairs. “All the companies want to stay ahead of the others, so they cut corners on safety and make profits from AI while letting society deal with the risks. That’s why the governments need to intervene. In addition to the competition, we have a lack of awareness of the risks among AI developers,” noted Mindermann.
He further elaborated that it’s not a part of the job description of an AI researcher to understand the risks AI poses.
“No one really knows what AI is causing in sensitive domains and the regulation so far is reactive after something bad happens. It could turn out similar to Chernobyl where after a big accident happened, the nuclear industry was largely shut down. Some AI companies are calling for regulation partly because they don’t want something to happen to the AI industry,” he mentioned.
Not Keeping Up with the Pace
“Regulation is central but it is too slow considering the rate at which AI is progressing,” the AI researcher suggested. Similar thoughts echoed in the AI community eight months ago when thousands of business and AI leaders signed an open letter calling for a six-month pause on the training of AI systems more powerful than OpenAI’s GPT-4.
While the call was not implemented, it was not considered a failure either because AI safety finally made it to the public agenda. Mindermann suggested that we need some immediate detailed commitments from companies before they train the next generation of AI systems.
“If a system has a level of dangerous capability that the governments will be able to evaluate, then the companies can commit to safety measures, including not deploying the system or not developing it any further if they haven’t got the safeguards ready,” he concluded.