
Stop Questioning OpenAI’s Open-Source Policy

Since OpenAI is in the spotlight, the AI community is circling around it, questioning its motives and decisions behind keeping GPT-4 such a secretive model.



OpenAI has not been very open with its AI. There are reasons for that, but on the company’s original open-source philosophy, cofounder and chief scientist Ilya Sutskever admitted, “We were wrong. Flat out, we were wrong.” He believes that AI, or AGI, will at some point become extremely and unbelievably potent, and that open-sourcing it might therefore not be such a good idea.

Amid the hype around AI models, with Google and Meta pressing into the field, Sutskever said that staying ahead of the competition was initially the company’s foremost reason for not open-sourcing its models. But now, looking at the potency and risks of these open AI models, OpenAI has even more reasons to keep the technology to itself.

Read: Doomsday Will Be Triggered By GPT-4

The company acknowledged the technology’s “chaotic” potential in its paper. The GPT-4 paper highlights how models like these might develop “power-seeking actions” as they become increasingly “agentic” in nature and develop their own goals. To dive deeper into this, OpenAI gave the Alignment Research Center (ARC) early access to the model to analyse these behaviours.

The company’s reasons for keeping the back end of the technology to itself now sound plausible. It is quite interesting that the very person working on projects that are taking the world by storm believes these technologies might be risky.

The hallucinations of ChatGPT and similar models are not hidden from anyone. Laughing at the false information spewed out by the chatbot was one of the highlights of Twitter for a long time, and it eventually led us to discover the chatbot’s potential dangers as well. Sutskever said, “At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models.”

For now, the hype around GPT-4 is just as big as the hype for ChatGPT was. But the AI community was very disappointed after reading the 98-page research paper and finding that it provides the bare minimum of information. It does not even mention the number of parameters of the LLM.

Read: Why Are Researchers Slamming OpenAI’s GPT-4 Paper?

As many frustrated researchers pointed out on Twitter, the whole point of writing a research paper is to make the model reproducible. “What knowledge do researchers even gain from this?” enquired one of them. The paper does not reveal anything about the dataset the model was trained on either.

This is another reason for the closed-door policy: to avoid the legal scrutiny that companies like StabilityAI and Midjourney are facing for copyright infringement. On this, Sutskever said, “Training data is technology. It may not look this way, but it is.”

Should OpenAI Change its Name?

The AI community is divided on the issue, and the debate over whether AI research should be open or closed has been intensifying. With OpenAI in the spotlight at the moment, the community is circling around it, questioning its motives and its decision to keep GPT-4 such a secretive model.

Emad Mostaque, the founder of StabilityAI, has put out an open offer on Twitter, telling OpenAI employees who want to work on actual “open AI” that he will match their current salary and benefits.

Well, Mostaque has been in the limelight for open-sourcing his company’s best technology, ‘Stable Diffusion’, which has enabled remarkable innovation around the world. But he still faces scrutiny from artists, and even some AI developers, over the copyright issues these text-to-image models create.

Earlier, Elon Musk had also criticised OpenAI for working behind closed doors, contrary to the name (OpenAI) and the open-source purpose the company professed at its start. “I’m still confused as to how a non-profit to which I donated USD 100 million somehow became a USD 30 billion market cap for-profit,” said Musk in a tweet.

Meanwhile, Musk is working on an AI platform of his own, an “OpenAI rival” that will soon compete with the company’s “wokeness” and closed-door policy.

Interestingly, Meta—the company that was scrutinised for several of its AI products, like Galactica and BlenderBot—has released LLaMA and is betting on generative AI. Yann LeCun, Meta’s AI chief, has always been heavily concerned about the ethical implications of generative models, but has now somehow decided to open source the company’s model.

To Open or Not to Open, Asks OpenAI

There are several merits to open-sourcing AI models as well. Apart from driving more innovation in the community, open-sourcing allows developers to build guardrails around a model and point out ways in which it can be potentially harmful. On this, Sutskever agreed with the critics, saying that if more people were allowed to study these models, the company would learn more about the problems as well.

When Musk and Sam Altman founded OpenAI in 2015, the introduction to their blog clearly stated that the company was a non-profit AI research company that would “build value for everyone rather than shareholders” and focus on “freely collaborating”. But ever since the company came under the wing of Microsoft, it has quite visibly headed for profits and become increasingly product-oriented rather than simply furthering research.

This all boils down to a simple question for OpenAI: Does it want to contribute to developing AI, or is it now growing more scared of the potential of the very technology it is building? If the latter is true, the story is scary for everyone, not just the company.


Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.