
Meta is Where OpenAI was Four Years Ago

“I think this is how we will achieve more aligned models, by making them open source and letting people work on them,” said Mark Zuckerberg. The same was the case with OpenAI and GPT-2.



Meta thinks it is leading the open-source LLM race. In reality, Meta is where OpenAI was four years ago with its open-source GPT-2. Now, with LLaMa, Meta wants to appear altruistic by giving its technology to the open-source community, but the truth is, it is doing so only because LLaMa isn’t where GPT-4 or PaLM are right now.

In a recent podcast with Lex Fridman, Mark Zuckerberg said, “The stage we are in right now, the equities balance strongly in my view towards doing this more openly.” He believes that once companies think they have gotten close to what they consider “superintelligence”, it makes sense for them to discuss and think things through a lot more.

Zuckerberg knows that even with its current capabilities, LLaMa, Meta’s open-source large language model, is still “an order of magnitude smaller than what OpenAI and Google are doing”. This explains why Meta wasn’t invited to the White House meeting with the President; the reason stated was that the meeting was “focused on companies currently leading in the space”. Zuckerberg’s statement in the podcast confirms that he is aware of this.

Meta-morphosis 

“The biggest OpenAI (GPT-4) or Google (PaLM-2) models do not reveal what the exact specs of these models are. They can be ten times bigger than their previous versions. The whole debate is if it is good for everyone in the world to access these frontier AI models,” said Zuckerberg. Interestingly, he admitted that Meta’s LLaMa is not a frontier AI model right now. 

Looks like all the Jiu-Jitsu training is finally rubbing off on him. “I think it’s a big gift [Jiu-Jitsu] of somehow being humbled, especially physically, opens your mind to the full process of learning, what it means to learn, which is being willing to suck,” said Fridman, describing Zuckerberg’s personal transformation.

However, Zuckerberg claimed that LLaMa, whose biggest version has 65 billion parameters, performs as well as other, larger models. “We can do it for very few computers and it’s just way more efficient, saves a lot of money for everyone who uses this,” said Zuckerberg. He acknowledged that the earlier releases of LLaMa aren’t quite at the frontier of models like OpenAI’s or Google’s, and are nowhere near superintelligence.

While Zuckerberg did not confirm if a second, more capable version of LLaMa is expected to release any time soon, he believes that even the current capabilities of the model are a testament to what Meta wants to pursue. But when asked whether he would open source the next version if it were as good as OpenAI’s or Google’s, he said that there are a lot of technologies that Meta hasn’t open sourced.

Meta is bound to follow OpenAI’s path, eventually 

With just 1.5 billion parameters, GPT-2 was ahead of everyone else in a space that was largely empty anyway. Now, Meta has found its answer: open sourcing the model and keeping it for research only. But it will probably not stay this way once LLaMa’s capabilities increase, and Meta will eventually become like the closed-door OpenAI, as was evident in the podcast.

“We want a lot more researchers working on it and for a lot of these reasons open source has a lot of advantages,” he said, explaining that open-source software is more secure because a lot of people criticise it and find holes in it, ultimately making it safer. “I think this is how we will achieve more aligned models, by making it open source and letting people work on it,” said Zuckerberg.

Meta had been shying away from the LLM space, largely because of past failures and the risks of open sourcing software, views that were expressed by Yann LeCun, the company’s AI chief. But now, as Zuckerberg explained, it makes sense for the company to shift its focus to open sourcing technologies, not for commercial use, but only for research purposes.

According to Zuckerberg, Meta’s bet is on the open-source community: instead of hopping onto the bandwagon of claiming superintelligent AI, it believes it is more beneficial to let the community use the model for research and build more efficient and aligned AI.

Meta is following its true nature as one of the biggest proponents of open source, but it might not stay this way for long, just as OpenAI changed in four years and kept its “superintelligent” GPT-4 to itself.


Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.