Meta has been open sourcing almost all of its creations, and it has undoubtedly been good for the company. Ever since the release of LLaMA, Meta has been touted as the open source champion. Now, with Llama 2 and Code Llama, praise for the company from the developer ecosystem is at an all-time high. But it is quite interesting how little Meta seems to care about anything beyond gathering that praise.
Recently, Jason Wei of OpenAI posted on X, from his alternative account, what he “overheard at a Meta GenAI social” event: the company plans to build Llama 3 and Llama 4, which are expected to be much more powerful than GPT-4. “Wow, if Llama-3 is as good as GPT-4, will you guys still open source it?” someone asked. The person from Meta AI replied that it would be, adding, “Sorry, alignment people.”
The remark clearly points at the closed-door systems that OpenAI, Google, and many others have been building in a bid to make their models more aligned. Whatever the case may be, Meta’s push towards open sourcing one of the most “dooming” technologies of all time, as many put it, is a little concerning.
Is open source really that good?
Sam Altman has been asked several times about the “kill switch” he supposedly carries in his blue backpack to shut down OpenAI’s systems if they go rogue. In a recent interview, he laughed it off, saying it was just a joke. But that still does not kill the fear people have about AI systems getting out of control.
With AI systems getting better every day, the conversation has steered away from OpenAI’s systems going rogue and towards open source models. Meta wants to open source a GPT-5-level model. As pointed out in an X discussion, this means there would be no kill switch: if a bad actor wants to take an open source model and weaponise it, there is no way to turn it off.
Moreover, all the research on AI safety could become meaningless. Companies that have been trying to make AI systems aligned, honest, and ethical would have no say in what happens, since everyone would be able to fine-tune open source models however they want. Arguably, this might be a little more dangerous than just giving away your data to OpenAI through ChatGPT.
Meta’s love for open source is not clearly founded on its own beliefs, either. There is no proof that the company planned to open source LLaMA in the first place; it only happened after the model was leaked and hailed as a game changer by many developers.
Open source is still under control
Yann LeCun, Meta’s AI chief, posted on X: “Once AI systems become more intelligent than humans, humans will still be the apex species.” AI doomers disagree with this statement. But even if humans remain the “apex species”, one thing is certain: OpenAI’s GPT is going to remain the “apex model” for a very long time. And OpenAI knows it.
The hype around open source models outperforming closed-door ones would amount to nothing if they were not compared against GPT-3 and GPT-4. Every model in the open source ecosystem measures itself against GPT’s capabilities, and on the HumanEval benchmark, which was created by OpenAI itself. Arguably, no one would bat an eyelid at these models otherwise.
Furthermore, even if Meta releases an open source Llama 3 that is on par with GPT-4 in terms of capabilities, it would still be measured on HumanEval. And by the time that happens, OpenAI might have already released GPT-5 and created yet another evaluation benchmark for open source models to chase. There is no way for open source to escape this.
Adding to all of this, OpenAI, together with Anthropic, Google, and Microsoft, launched the Frontier Model Forum to ensure the safe and responsible development of AI models. So if Llama or its successors go rogue in the future, they can be pulled down from Hugging Face and GitHub in a moment.
In May, Meta was not invited to the White House for AI discussions, and it is not part of this forum either. The company is being left behind, perhaps voluntarily, and is trying to build an open source league of its own, which, interestingly, is still controlled by OpenAI and others. So Meta’s bid to be the good guy of AI through the open source community may not last for long.