Gone are the days when AI was a mere fantasy from the pages of science fiction novels. The future is here, and spearheading this revolution is none other than OpenAI, whose cutting-edge large language models, like GPT-4, are paving the way for unparalleled innovation. Not one to be left behind, Google has also jumped on the bandwagon with its latest chatbot sensation, ‘Bard’, powered by LaMDA.
This has brought a lot of attention to, and concern about, the capabilities of LLMs. Now, AI experts like Gary Marcus, Yoshua Bengio, Grady Booch and Emad Mostaque, along with Elon Musk and more than a thousand other signatories, have called for a temporary pause on training systems more powerful than GPT-4. It looks as though OpenAI is standing alone against much of the AI community.
This shows that, despite all the hype surrounding these technological marvels, most experts remain sceptical of, and even scared by, their true capabilities. Or maybe everyone is just trying to stall OpenAI’s efforts to get ahead in the AI race. Either way, at this point it would not be wrong to call these models ‘la la models’ or ‘lame language models’; you decide.
Sam Altman, CEO of OpenAI, acknowledges the limitations of these models. After calling ChatGPT a “horrible product” a few weeks ago, he said in a recent podcast with Lex Fridman that GPT-4 is just a very good “next word predictor”. Even so, OpenAI believes that LLMs, and more specifically text, are a projection of the world, as Ilya Sutskever, co-founder and chief scientist at OpenAI, has put it.
In the same podcast, Altman discussed how the future will hold multiple AGIs rather than a single one, and LLM-based AI will probably be one of them. Moreover, Altman said that LLMs combined with RLHF are the right path towards AGI. OpenAI clearly believes it knows what it is doing, and there is no one who can deny that.
Why Do Experts Not Like LLMs?
The biggest critic of LLMs is Meta AI chief Yann LeCun, who has long argued that autoregressive LLMs, which merely predict the next word, will never pave the way towards AGI. Even so, he has refused to sign the petition to pause training beyond GPT-4. Although he has been one of the greatest proponents of self-supervised learning and a keen critic of reinforcement learning (which evidently makes him less trusting of LLMs), LeCun disagrees with the premise of the petition.
Nope.
I did not sign this letter.
I disagree with its premise. https://t.co/DoXwIZDcOx
— Yann LeCun (@ylecun) March 29, 2023
In a recent tweet, LeCun gave some credit to reinforcement learning with human feedback (RLHF), the technique behind OpenAI’s ChatGPT, saying that while it might reduce the error accumulation that these autoregressive LLMs produce, it cannot eliminate the problem entirely.
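LeCun’s error-accumulation point has a simple back-of-the-envelope form. As a toy sketch (the fixed, independent per-token error rate `e` is an illustrative assumption for the arithmetic, not a property of any real model), if each generated token goes off-track with probability `e`, the chance that an n-token autoregressive output stays entirely on-track decays exponentially:

```python
def p_on_track(e: float, n: int) -> float:
    """Probability that an n-token autoregressive generation contains no
    errors, assuming each token independently errs with probability e.
    Because every token is conditioned on the ones before it, a single
    early error can derail everything that follows."""
    return (1.0 - e) ** n

# Even a tiny per-token error rate compounds over a long generation:
for n in (10, 100, 1000):
    print(n, round(p_on_track(0.01, n), 4))
```

With a 1% per-token error rate, a 1,000-token answer is almost certain to go wrong somewhere, which is the intuition behind LeCun’s scepticism; RLHF, on this view, lowers `e` but cannot make it zero.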
Gary Marcus, who holds views on AGI that contrast with LeCun’s, is another passionate LLM basher. He agrees that LLMs are far from AGI but argues that they can prove dangerous in themselves, and he is leading this petition.
There is a fallacy here, in thinking that LLMs (which aren’t AGI) can’t cause serious harm because they aren’t AGI.
To the contrary, they could cause serious harm, despite not being AGI, precisely because they are so unreliable and so intractable. https://t.co/GKeeo0zQ7o
— Gary Marcus (@GaryMarcus) March 27, 2023
Most recently, Geoffrey Hinton also weighed in with his thoughts on general-purpose AI. He said that though he once subscribed to the idea that LLMs would lead the way, he no longer does. But when it comes to AI wiping out humanity, “It is not inconceivable,” said Hinton.
We can already see these implications in the hallucinations these models produce. First it was Meta’s BlenderBot, then ChatGPT, then Bing Chat, and now Bard. Every day, users find new ways in which these models spew out gibberish, and then start criticising the very concept of such models. Meanwhile, OpenAI’s “lame” paper on how GPT technology is going to impact jobs gave out more reasons to believe that this technology is here to stay, and is definitely not something to be taken lightly.
LLMs get a lot of hate for absolutely no reason at all
If this LLM technology is so lame, or so risky, why are all these companies rushing to build something from LLMs and compete against OpenAI? Almost every big tech company now has its own LLM, except for Apple. Interestingly, Steve Wozniak, co-founder of Apple, has also signed the petition to pause progress beyond GPT-4. If everyone believes that LLMs are an off-ramp on the road to AGI, why are they at the forefront of AI development and innovation?

The reality remains that big tech companies want to stay on top. Apart from the researchers who actually want to build something towards AGI, Google, Microsoft, or any other company probably only cares about building chatbots because that is the money-minting trend of late.
People who criticise, or are merely sceptical about, LLMs do not matter to the big tech circle. If those researchers could build a product with an actual use case, comparable to the current LLM-based chatbots, then their criticism might start to make sense to big tech.
It Is Just About OpenAI
Bill Gates, even though he does not explicitly talk about LLMs, said in his recent blog post that the age of AI has begun. In an article praising OpenAI and GPT—which was mostly fluff—Gates makes the case that a lot of problems in the world will get solved with the help of the technologies that Microsoft is investing in.
This fluffy article was an open invitation for Elon Musk to take a dig at Gates’ assertion. He said that Gates’ understanding of AI has always been limited.
Elon Musk just called out Bill Gates for being a moron who doesn’t even understand the things he is working on 🤣
— Matt Wallace (@MattWallace888) March 27, 2023
Though Musk hasn’t openly called OpenAI or any LLM technology bad, he has been criticising the “woke” approach to AI taken by the company, one he founded alongside Altman. Now, with him signing the petition to stop OpenAI, this looks like a simple competitive move to get ahead.
Interestingly, according to reports, Musk recently roped in DeepMind researcher Igor Babuschkin to work on a rival to ChatGPT. Musk’s bid to build a rival to OpenAI’s “woke” bot is still going to rely on LLMs. So, as it stands, there is no loss for LLMs at the moment, but the war against OpenAI rages on.
In Fridman’s podcast, Altman said he believes that representing the world in text, and building models on top of it, is the road towards AGI. This claim has come as a threat to companies that have been building towards AGI all this while: “How can a new startup like OpenAI head towards AGI before us?”