
No, the Infamous 6-Month AI Pause Letter Wasn’t a Failure

Though the AI pause wasn't adopted, the letter's impact is evident, says signatory


Since late 2022, around the time ChatGPT was released, not a week has gone by without an AI industry insider trumpeting the existential risks of AI. In March 2023, thousands of business and AI leaders signed an open letter calling for a six-month pause on the training of AI systems more powerful than OpenAI’s GPT-4. The signatories warned that the technology could “pose profound risks to society and humanity”. The call wasn’t adopted, but the letter’s impact is evident.

One of the signatories, Olle Haggstrom, doesn’t think of the letter as a failure. “What we did was something very important, namely, we put the safety issue and the risk issue on the public agenda,” he told AIM. “Now that you and I are having this conversation, I think it is a success,” he added. 

Haggstrom has been talking about these issues for more than a decade. For a long time, dangerous AI breakthroughs seemed at least decades away, so researchers could discuss them in the abstract. But the last couple of years have seen such an incredible acceleration in AI capabilities that the situation has become very urgent, he said, pointing to the dire need to focus on ethical and responsible AI.

The signatories never expected an immediate six-month pause; it was not a realistic prospect. “It’s also not sufficient. But this is just something that we put on the table for concreteness, to get the discussion going,” clarified Haggstrom.

Along similar lines, Michael Osborne, a professor of machine learning at the University of Oxford, told AIM, “The Overton window does seem to have shifted to allowing political discussion of the harms that AI may pose, with debates having begun in many polities.” Osborne was among the thousands of signatories calling for the AI pause. Recalling his reason for signing, he said, “I was concerned, and am concerned, that an increasingly powerful technology is governed only by Big Tech. AI’s impacts are likely to be deep and broad, across society, and must be subject to democratic governance.”

Why The Pause Failed 

The director of the Center for AI Safety, Dan Hendrycks, whose work X-Risk Analysis for AI Research was cited in the open letter, explained why the letter could not stop AI advancements. “It’s important to address that they [the tech companies] are caught in a situation where, if they were to pause, then their competitors would end up going ahead. Less ethically minded ones would end up doing better or getting a competitive advantage. There is the prioritisation of profit over safety from many of these companies. If they did decide to just stop with it all, I’m not sure the impact would be that positive because many others would just keep doing the same thing,” he told AIM.

The researcher, who made it to the TIME AI 100 list, suggested we need some coordination mechanism or external actor to tell the companies that they all need to stop, instead of waiting for them to volunteer. He pointed out that Elon Musk initially founded OpenAI to prioritise safety because Larry Page’s Google wasn’t doing so. Then it became a capped-profit company and kept racing. Anthropic, in turn, was formed by people at OpenAI who didn’t like what they saw; they thought OpenAI was not taking safety seriously enough and started their own company. Two years later, it is doing the same thing as OpenAI.

“This shows that good intentions and people taking a risk seriously won’t be enough to counteract these extreme pressures to race,” he said. “The lesson here is that we can’t get them to voluntarily pause. Maybe we should build a pause button, like there’s a nuclear launch button, if things start looking much more dangerous. We should buy ourselves that ability or option and not just depend on the goodwill of these different companies,” Hendrycks concluded.

No Clear Solution

The open letter warned of an “out-of-control race” to develop machines that no one could “understand, predict, or reliably control”. It also urged governments to step in if developers would not pause work on AI systems more powerful than GPT-4. And it raised the question: should we develop non-human minds that might eventually outsmart and replace mankind?

Pinpointing the role of academia in the field, Osborne said, “The academy has a key role to play in developing AI independently of big tech—however, today, such independence is compromised by deep ties between the two. We need to find state funding for academics to replace that from big tech, to build an academy that can be trusted as an independent centre for AI development and criticism.”

Osborne has co-founded a new initiative at Oxford, the AI Governance Initiative, to research governance approaches and to engage with policy formation. “I believe AI governance is a central issue for our age,” he concluded.

OpenAI recently announced its superalignment project. The idea boils down to: ‘we observe that advanced AI is dangerous, so we build an advanced AI to fix this problem’. Haggstrom believes it may work, but calls it a dangerous leap in the dark. He suggests increasing the momentum of the emerging safety movement instead. “We’ve seen that humans are not able to eradicate bias and other unwanted behaviours in these language models. And to the extent that reinforcement learning from human feedback does work on current models, there are strong reasons to expect it to break down completely with future, more capable models,” explained the professor of mathematical statistics at Chalmers University of Technology.

“We can have a dataset consisting of the nicest, most politically correct things that you can imagine, and this is still going to be dangerous because we have no control over what goes on inside these AI models. When they are released out in the wild, they will inevitably encounter new situations which are outside the distribution of the training data. It’s not just that I don’t know; even the developers are nowhere near understanding and being in control of this,” said a concerned Haggstrom.

“Tomorrow’s AI poses risks today,” he further stated, recalling a piece in Nature. “It is an unfortunate part of current AI discourse that people contrast near-term AI risk with long-term AI risk. Now that we understand that the maximally dangerous breakthrough may not be decades away, these two groups need to unite. They should work together against the leading AI developers, who are rushing forward almost blindly with no control of their models. Neither of these groups of AI ethicists is getting what they want,” Haggstrom concluded.

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.