OpenAI has not been very open with its AI. While there are reasons for that, the research organisation’s cofounder, Ilya Sutskever, said, “We were wrong. Flat out, we were wrong”. The company’s chief scientist believes that AI, or AGI, will at some point become extremely and unbelievably potent, and that open-sourcing it might therefore not be such a good idea.
Amid the hype around AI models, with Google and Meta pressing into the field, Sutskever said that the company’s initial reason for not open-sourcing its models was to stay ahead of the competition. But now, given the potency and risks these AI models pose, OpenAI has even more reasons to keep the technology to itself.
Read: Doomsday Will Be Triggered By GPT-4
The company acknowledged the “chaotic” potential of this technology in its paper. The GPT-4 paper highlights how models like these might develop “power-seeking actions” as they become increasingly “agentic” in nature and develop their own goals. OpenAI decided to dive deeper into this and gave the Alignment Research Center (ARC) early access to the model to analyse these behaviours.
The company’s reasons for keeping the technology’s inner workings to itself sound plausible now. It is quite interesting that the person working on projects that are taking the world by storm believes these technologies might be risky.
The hallucinations of ChatGPT and similar models are no secret. Laughing at the false information spewed out by the chatbot was one of the highlights of Twitter for a long time. This eventually led to the discovery of its potential dangers as well. Sutskever said, “At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models”.
For now, the hype around GPT-4 has been just as big as the hype around ChatGPT. But the AI community was deeply disappointed to read the 98-page research paper and find that it provides the bare minimum of information. It does not even mention the number of parameters of the LLM.
Read: Why Are Researchers Slamming OpenAI’s GPT-4 Paper?
As many researchers pointed out in frustration on Twitter, the whole point of writing a research paper is to make the model reproducible. “What knowledge do researchers even gain from this?” enquired one researcher. The paper reveals nothing about the dataset the model was trained on either.
This points to another reason for the closed-door policy: avoiding the legal scrutiny that companies like StabilityAI and Midjourney are facing for copyright infringement. On this, Sutskever said, “Training data is technology. It may not look this way, but it is”.
Should OpenAI Change its Name?
The AI community is divided on the issue. The debate over whether AI research should be open or closed has been intensifying. Since OpenAI is in the spotlight at the moment, the community is circling it, questioning its motives and the decisions behind keeping GPT-4 such a secretive model.
Emad Mostaque, the founder of StabilityAI, has put out an open offer on Twitter, telling OpenAI employees that he will match the salary and benefits of anyone who wants to work on actual “open AI”.
Mostaque has been in the limelight for open-sourcing his company’s best technology, ‘Stable Diffusion’, which has spurred remarkable innovation around the world. But he still faces scrutiny from artists, and even some AI developers, over the copyright issues these text-to-image models create.
Elon Musk has also criticised OpenAI for working behind closed doors, contrary to the name (OpenAI) and the open-source purpose it espoused at its founding. “I’m still confused as to how a non-profit to which I donated USD 100 million somehow became a USD 30 billion market cap for-profit,” said Musk in a tweet.
Meanwhile, Musk is working on an AI platform of his own, an “OpenAI rival”, soon to compete with both the “wokeness” and the closed-door policy of the company.
Interestingly, Meta—the company that was scrutinised for many of its AI products, like Galactica and BlenderBot—has released LLaMA and is making bets on generative AI. Yann LeCun, the chief of AI at Meta, has always been heavily concerned about the ethical implications of generative models, yet has now decided to open-source the company’s model.
To Open or Not to Open, Asks OpenAI
There are several merits to open-sourcing AI models as well. Apart from driving more innovation in the community, open-sourcing allows developers to build guardrails around a model and point out the ways in which it can be harmful. On this point, Sutskever agreed with the critics, saying that if more people were allowed to study these models, the company would learn more about their problems as well.
When Musk and Sam Altman founded OpenAI in 2015, the introduction to their blog clearly stated that the company was a non-profit AI research company that would “build value for everyone rather than shareholders” and would focus on “freely collaborating”. But ever since the company came under Microsoft’s wing, it has quite visibly headed for profits and become increasingly product-oriented rather than simply furthering research.
This all boils down to a simple question for OpenAI: Does it want to contribute to developing AI, or is it now more scared of the potential of the very technology it is developing? If the latter is true, the story is scary for everyone, not just the company.