Yann LeCun Thrashes GPT-3 — Is The Hype Real?

With its massive buzz, GPT-3 is a language model that has divided AI leaders. While some consider it a revolutionary innovation, other experts are deeply concerned about the hype surrounding it. Most recently, Yann LeCun, VP & Chief AI Scientist at Facebook, criticised OpenAI's massive language model in a Facebook post.

In a brief essay, LeCun laid out a few realities about the model's capabilities and the hype that has built up around it.

He based his arguments on a recent exploratory study by Nabla, in which the company debunked some of the significant expectations people have built around this massive language model. The study noted that "… some have claimed that algorithms now outperform doctors on certain tasks and others have even announced that robots will soon receive medical degrees of their own." With GPT-3, the study argues, these remain far-fetched dreams.

According to LeCun, "… trying to build intelligent machines by scaling up language models is like building high-altitude aeroplanes to go to the moon." One might beat altitude records that way, he argues, but going to the moon will require a completely different approach altogether.

LeCun, often described as one of the 'godfathers of AI', was blunt about those who have high expectations of GPT-3: "Some people have completely unrealistic expectations about what large-scale language models such as GPT-3 can do."

Though he believes the language model is entertaining and even creative, it cannot yet replace humans in certain tasks. The Nabla report reached the same conclusion: after testing GPT-3 in a variety of medical scenarios, it found that "there's a huge difference between GPT-3 being able to form coherent sentences and actually being useful."

In fact, Nabla noted that in one test case, the model was unable to perform a simple addition of the cost of items on a medical bill. In another, while acting as a support chatbot, GPT-3 advised a simulated patient to kill himself.
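The kind of arithmetic probe Nabla describes can be sketched with a minimal evaluation harness. Everything below is illustrative: the bill items, the prompt wording, and the `query_model` stub (which stands in for a real GPT-3 API call and returns a wrong total, mirroring the reported failure) are hypothetical, not Nabla's actual test setup.

```python
# Illustrative sketch of an arithmetic probe for a language model.
# The bill items and the model's answer are hypothetical stand-ins;
# a real evaluation would send the prompt to a GPT-3 endpoint.

def expected_total(items):
    """Ground-truth sum of line-item costs, in dollars."""
    return sum(cost for _, cost in items)

def query_model(prompt):
    # Stub standing in for a language-model call; returns a wrong
    # total to mirror the kind of failure Nabla reported.
    return "96"

bill = [("X-ray", 40.0), ("blood panel", 25.0), ("consultation", 30.0)]
prompt = ("What is the total cost of: "
          + ", ".join(f"{name} ${cost:.0f}" for name, cost in bill) + "?")

truth = expected_total(bill)          # correct answer: 95.0
answer = float(query_model(prompt))   # model's answer: 96.0
print(f"expected {truth}, model said {answer}, correct: {answer == truth}")
```

The point of such a harness is that being "able to form coherent sentences" is checked separately from being right: the model's fluent reply is parsed and compared against a ground truth the tester can compute exactly.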
