Imagine building products and then asking the government to regulate other similar ones. That is exactly what OpenAI is asking the US Senate to do. During his Senate appearance, Sam Altman asked the government to form a new agency to regulate the industry, with the power to both grant and revoke licences for companies. At the same time, he believes that OpenAI’s technology, though potentially risky, will remain under control, and should therefore be audited differently.
“Regulation of AI is essential,” said Altman. But OpenAI may simply be trying to shoo away the competition, to earn profits and pay back its investors, rather than acting out of genuine fear of the technology. To achieve this, the company has sided with the government to raise the barriers to entry for other players.
While this is happening on OpenAI’s side, StabilityAI, one of the bigger proponents of open source, has filed a letter with the US Senate advocating for open models. “Open models and open datasets will help to improve safety through transparency, foster competition, and ensure the United States retains strategic leadership in critical AI capabilities,” the letter read.
Similarly, in February, amid the discussion around open source in the EU AI Act, GitHub CEO Thomas Dohmke said that open source developers should be exempted from the Act. “The compliance burden should fall on companies that are shipping products,” he explained, naming the likes of OpenAI, Google, and Microsoft.
Fear Mongering AI
By now, it is quite clear that OpenAI has stopped caring about “open” AI altogether. The Senate discussion showed Altman playing to the regulators by tapping into their fears, as many of them do not really understand how the technology works. Senator Josh Hawley admitted he was there “to try to get my head around what these models can do”.
OpenAI has been paving the way to this conclusion all along. Last month, the company published a blog post on deploying AI systems safely, arguing that it is important to understand the benefits and risks of such models before releasing them to the public. The company claimed it did exactly that with GPT-4, testing it for six months before release.
Eventually, Altman decided to put a pause on GPT-5, citing safety concerns.
At the Senate, Altman repeated the same line: “appropriate safety requirements, including internal and external testing prior to release”, are essential for any AI model, and the government should now oversee this as more and more open-source models emerge. But OpenAI’s version of AI safety has been mostly fluff, all words and no action. The technology has even been banned in several countries.
After OpenAI became a “for-profit” company, its chief scientist Ilya Sutskever said, “we were wrong”, and that if AI becomes potent someday, “it doesn’t make sense to be open source”. The company believes that “open-sourcing AI is not wise”.
It looks like, instead of building a product that would serve as the company’s moat, OpenAI has decided to construct a regulatory moat to ensure corporate dominance. Observed closely, this reads as an anti-competitive practice, and one that could invite more legal and regulatory backlash for OpenAI itself.
On the other hand, it may be that OpenAI, as leader of the pack, understands the potential danger better than anyone else. Take the example of OpenAI hiring a kill-switch engineer to pull the plug on the system if it ever gets out of control. Then again, it could all just be a facade; who knows?
Altman Got Off Easy
Big names in AI have long opposed much of what OpenAI is doing. Elon Musk criticised the closed-door approach, which eventually led to a petition to pause giant AI experiments beyond GPT-4, now signed by more than 27,000 people. Even though it was not stated explicitly, this was clearly a move against the monopoly that OpenAI and Microsoft currently hold over AI.
One of the leading petitioners, along with Musk and Steve Wozniak, was Gary Marcus, who has long been critical of deep learning technologies. Interestingly enough, he was among those called to the Senate for the discussion about the developing AI technology. To everyone’s surprise, many of Marcus’s and Altman’s ideas about restricting the technology coincided, though their reasons may be completely different.
To counter the pause petition, LAION (Large-scale AI Open Network), a Germany-based research organisation, filed a petition and sent a letter to the European Parliament urging it to speed up the opening of AI models for a “secure future”. The letter was signed by several European AI experts, including Jürgen Schmidhuber.
This is similar to what StabilityAI is asking for now. Interestingly, LAION received a lot of backlash over copyright infringement, as it was one of the dataset providers for Stable Diffusion, StabilityAI’s image generator.
Meanwhile, amid this struggle between OpenAI and open source, LAION collaborated with Together and Ontocord.ai to release OpenChatKit, an open source alternative to ChatGPT.
Everyone, it seems, is juggling what to do about AI and open source. But for now, OpenAI has taken on another enemy: the open source developer community. The company was already being criticised for filing a trademark on ‘GPT’.
Sitting on the Fence, But Not Out of Altruism
The alignment between Marcus’s and Altman’s approaches to regulating the developing AI technology might be just a coincidence. But it should not be confused with the altruistic image Altman sells to the public by acknowledging the risks of the models his company is building.
Moreover, Altman has been touted as a very conscientious man for not taking any equity in OpenAI. The truth might be that he wants distance from the company in case anything goes wrong, while he earns money by investing in other companies through Y Combinator.
Google’s leaked internal document, which argued that neither Google nor OpenAI has a moat, clearly stated that open source is heading for the win in the developing AI technology. While it might look like the GPT technology OpenAI has built is its moat, and Google’s Bard might hint at that, it is not certain that this is actually the case.
Ever since Meta’s LLaMA got leaked, the open source community has been able to replicate what OpenAI’s technology achieves without requiring as much computational power. This clearly looks like a threat to OpenAI. Google’s PaLM and Hugging Face have also been creating tension for OpenAI with their open products. And the GPT-4 paper revealed little about the model itself, frustrating researchers and developers once again.
Amid this battle over whether LLM-like models should be open or closed, Meta has sided with the open community. Yann LeCun argued that companies hide their technologies purely for commercial reasons, and that keeping them closed is the more dangerous course.
Thus, rather than have this called out as a purely competitive move, OpenAI has taken the approach of siding with government regulators, many of whom are AI doomers, and playing on their fears to regulate AI and remove competition.
Meanwhile, to get into the good books of the open source community, the company has been taking steps to open source some of its models. Earlier, the company would extract data and models from the open community without contributing anything back.
Recently, according to reports, OpenAI is releasing another open source model, which may have nothing to do with GPT at all. It might be just a gimmick to call itself “open” again, since the company’s business model is based on providing proprietary models through APIs.
Meanwhile, the US can learn from the debate going on in Europe to avoid the harms the drafted AI Act might entail for the open source community. Instead of imposing an all-out restriction on AI models that would sweep in the developer community as well, the US could be wise and not take Altman’s words at face value. Perhaps it needs advocates from the open source community, such as Emad Mostaque, Harrison Chase, and Clem Delangue, to speak at the Senate on behalf of open source.