‘Pause Giant AI Experiments’ is a Misguided Approach

Instead of pausing giant AI experiments, let’s build a CERN for AI

Recently, a petition to decelerate giant AI experiments and pause the training of models more powerful than GPT-4 was put forth by AI researchers and technologists, including Elon Musk and Yoshua Bengio, among others. The open letter has so far received about 20,000 signatures. Interestingly, detractors feel that apart from gathering researchers who consider AI a potential threat to humanity, the letter may merely be a call to prevent OpenAI from getting miles ahead in the AI race.

Balancing this out, LAION (Large-scale Artificial Intelligence Open Network), a German non-profit AI research organisation, has launched a petition calling for open AI models for a “secure digital future”. In this letter, the organisation calls the previous petition a “misguided approach” that might prove detrimental to the very objectives it pursues, arguing that we should instead speed up AI innovation.

The most important goal of this new petition is to build a publicly funded international supercomputer for open-source AI research: what it calls a CERN for open-source, large-scale AI research and its safety. “Establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models,” reads the letter, which now has more than 2,500 signatories, a quarter of its 10,000 target.

The Good Side

The earlier petition for a pause on training AI models like GPT-4 led to a lot of misinformation; some thought it called for a complete ban on AI research. On the brighter side, it sparked a conversation in the AI community about the need for guardrails on the industry’s fast-paced growth, led especially by companies like OpenAI and Google.

In a podcast with The New York Times, Sundar Pichai expressed similar views. He said that though he might disagree with the details of Musk’s petition, he agrees with the spirit of the document. “AI is too important an area not to regulate. It’s also too important an area not to regulate well. So I’m glad these conversations are underway,” he said.

Written by LAION members – Christoph Schuhmann, Huu Nguyen, Robert Kaczmarczyk, and Jenia Jitsev – the new petition might sound like a counter to the earlier one, but it is in some ways similar. It argues that securing the independence of academia and government institutions, by taking the monopoly on AI research away from large corporations like Microsoft, OpenAI, and Google, is the need of the hour.

“Technologies like GPT-4 are too powerful and significant to be exclusively controlled by a select few,” reads the petition.

That holds true. Globally, corporations, governments, and educational institutions are increasingly relying on innovations from companies like OpenAI and Google, which the petition says are “driven by short-term profit interests and act without properly taking democratic institutions into their decision-making loop”.

Moreover, LAION believes that a large foundation model, trained on the supercomputer and accessible to all, would open up substantial benefits to small and medium-sized companies globally. But there are problems with this petition as well. As much as it aims to democratise AI research, the bid to build a “CERN for AI” is still limited to the EU, the UK, Canada, and Australia.

Still, the comments on the petition suggest that independent researchers are optimistic about what it promises: open and democratic access to AI models.

The Bad and The Ugly

Just like its predecessor, this petition too has been receiving a lot of backlash from the community. Apart from actual concerns about what the letter says, the comments have been driven by what LAION has done in the past. 

LAION has been at the receiving end of a lot of controversy for building the copyrighted-image datasets used to train image-generation models like Stable Diffusion and Midjourney. The petition has a pros and cons section where people have been bashing the research organisation and expressing cynicism about what the petition actually aims to achieve.

Twitter users have pointed out that since LAION’s data was scraped off the internet without due credit to the original sources, it is hard to trust whether the petition is genuinely motivated towards forming a safety-minded body, or is just a marketing gimmick to cash in on the raging debate around the potential risks of AI models.

There are more reasons to question whether the petition will achieve what it aims to. Though it is important to take the power to build AI away from a few big-tech companies, it is also important to consider the problems that come with open-sourcing such technology, and LAION is certainly no stranger to them.

The LAION-fed Stable Diffusion image generator was open-sourced by Stability AI. Though the company claims that it does not allow the generation of problematic and NSFW images, developers have found ways to make that happen. 

This shows that wide public adoption of such technology, which the petition itself recognises as potentially risky, might add to the risks rather than to safety. Though the letter touts that making technologies like GPT-4 publicly available would make assessing and addressing problems easier, the opposite is also very likely to happen.

Two Sides of a Coin

Yann LeCun, one of those who opposed the Musk-led petition for pausing work on GPT-like AI models, might actually agree with this new bid by LAION. In a conversation with Andrew Ng, he argued that people worried about AGI are misguided, and that though there are real risks of harm now, these need to be addressed with research.

LeCun and Ng argue that regulating research based on possible future harm is a bad idea, and that regulating products might be the right way forward. LeCun has always believed that auto-regressive models will lead nowhere in the future of AI, but he still calls GPT-4 amazing and says that building something better than it is equally important.

Musk might disagree again, as in a recent spat with LeCun on Twitter on the same topic, where LeCun compared AI safety to aircraft safety. “Why should AI engineers be more scared of AI than aircraft engineers were scared of flying?” he said, to which Musk replied that aeroplanes used to crash frequently until the FAA brought in regulations requiring aircraft makers not to cut corners on safety.

We will probably keep seeing debates like these rage for at least six more months. But one thing is for sure: both sides are concerned about the safety of AI research; only their approaches differ.

Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
