Some of the biggest names in tech, including Elon Musk, deep learning pioneer Yoshua Bengio, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque, have signed an open letter calling for a six-month pause in training new AI systems more powerful than OpenAI’s GPT-4, warning of ‘profound risks to society and humanity’.
a big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Kraknova, P. Maes, @Grady_Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4 https://t.co/PJ5YFu0xm9
— Gary Marcus (@GaryMarcus) March 29, 2023
Issued by the non-profit organisation ‘Future of Life Institute’, the letter’s signatories include hundreds of other AI and tech heavyweights, such as Gary Marcus, professor emeritus at New York University, who has often been critical of AI’s dependence on LLMs and deep learning. The letter cited OpenAI’s recent statement on AGI, which said, “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models,” arguing that the time had come to step back and review.
Reuters was able to verify 11 of the top signatories, including Musk, Mostaque and Gary Marcus. Marcus shared the letter on Twitter, calling it ‘big news’. Interestingly, Yann LeCun, Meta’s Chief AI Scientist, has refrained from adding his name to the list. LeCun tweeted that he ‘disagreed with the premise’ of the letter.
Nope.
I did not sign this letter.
I disagree with its premise. https://t.co/DoXwIZDcOx
— Yann LeCun (@ylecun) March 29, 2023
The letter comes only two weeks after the public release of OpenAI’s GPT-4, a multimodal AI model the company has called its ‘most advanced system’. With AI research moving at a breakneck pace, the letter aligns with the sentiment of groups within the community that have recently called for regulation in the space.
The letter asks that AI labs and experts build a set of safety protocols that are then ‘rigorously audited and overseen by independent outside experts’. It clarifies that ‘this does not mean a pause on AI development in general’, but rather a call for entities, presumably like OpenAI, to ‘step back from the dangerous race’ currently underway in AI.