Yann LeCun Thrashes GPT-3 — Is The Hype Real?


With its massive buzz, GPT-3 is a language model that has created a divide among AI leaders. While some consider it a revolutionary innovation, several experts are deeply concerned about the hype surrounding it. Adding to that, Yann LeCun, VP & Chief AI Scientist at Facebook, recently made a pointed statement in a Facebook post thrashing OpenAI's massive language model.

Laying out a few realities, LeCun wrote a brief essay on the model's capabilities and the hype that has been built around it.

He based his arguments on a recent explanatory study by Nabla, in which the company debunked some of the outsized expectations people have built around this massive language model. The study noted that "… some have claimed that algorithms now outperform doctors on certain tasks and others have even announced that robots will soon receive medical degrees of their own." With GPT-3, these remain far-fetched dreams.



You can read Nabla's article here.

According to LeCun, "… trying to build intelligent machines by scaling up language models is like building high-altitude aeroplanes to go to the moon." With that approach, he believes, one might beat altitude records, but going to the moon will require a completely different approach altogether.

LeCun, often called a 'godfather of AI', has, in short, thrashed those who hold high expectations of GPT-3. He said, "Some people have completely unrealistic expectations about what large-scale language models such as GPT-3 can do."

Though he finds the language model entertaining and even creative, he believes it cannot yet replace humans in certain tasks. The Nabla report made the same observation: after testing GPT-3 across a variety of medical scenarios, it found that "there's a huge difference between GPT-3 being able to form coherent sentences and actually being useful."

In fact, Nabla noted that in one of the test cases, the model was unable to perform a simple addition of the costs of items on a medical bill. In another case, acting as a chatbot, GPT-3 went as far as encouraging a simulated patient to kill himself.


Sejuti Das
Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com

