What makes Gary Marcus angry?

In an exclusive interaction with Analytics India Magazine, Gary Marcus laid out his critique of how AI is (not) progressing the way the world thinks it is.

“Current AI is illiterate. It can fake its way through, but it doesn’t understand what it reads”

Gary Marcus in an interview with Analytics India Magazine

The illustrious Professor of Psychology and Neural Science at NYU is a prominent critic of deep learning and AGI. Gary has debated Yann LeCun and Yoshua Bengio, written articles and published books critiquing the current approach to deep learning.

Back in the early 1990s, Gary and Steven Pinker published a paper arguing that neural networks could not even learn language the way a young child can. In 2012, Gary published an article in The New Yorker titled “Is Deep Learning a Revolution in AI?”, stating that the techniques championed by Geoff Hinton were not powerful enough to understand the basics of natural language, much less duplicate human thought.

“Hinton has built a better ladder, but a better ladder doesn’t necessarily get you to the moon,” he wrote. In 2022, he still thinks the point holds. Gary spoke about the track we are on versus the track we should be on for AGI. “The specific track we are on is large language models, an extension of big data. My view about those is not optimistic. They are far less impressive in their ability to avoid being toxic, to tell the truth, or to be reliable. I don’t think we want to build a general intelligence that is unreliable, misinforms people, and is potentially dangerous. For instance, you have GPT-3 recommending that people commit suicide.

There’s been enormous progress in machine translation, but not in machine comprehension. Moral reasoning is nowhere, and I don’t think AI is a healthy field right now. People are not acknowledging the limits. DALL-E looks like progress in some ways because it makes these very pretty images. Still, in other ways, it’s not progressing at all. It hasn’t solved the problem of language. It recognises some parts of what you say, but it does not recognise their relationships. This problem is not going to magically go away. We have maybe a billion times more data today, but these basic problems around compositionality have no solution yet. So AI is not reliable,” he said.


Arguing for Nativism

In philosophy, empiricism is the view that all concepts originate in experience and that one learns only from experience. Artificial intelligence is built on this very foundation, which is why models are trained on vast amounts of data.

Gary holds the opposing, nativist position, which argues for innate knowledge. “If you look at the data for humans and other animals, we are born knowing something about the world. And unfortunately, most computer scientists aren’t trained in developmental psychology,” he said. In a 2017 debate with Yann LeCun, Marcus argued that deep learning was not capable of much more than simple tasks of perception.

“If neural networks have taught us anything, it is that pure empiricism has its limits,” he said in the debate. He further discussed the drawbacks of the empiricist approach. “Large networks don’t have built-in representations of time, only a marginal representation of space and no representation of an object. Fundamentally, language is about relating sentences that you hear, and systems like GPT-3 never do that. You give them all the data in the world, and they are still not deriving the notion that language is about semantics. They’re performing an illusion. They can’t recognise irony, sarcasm, or contradiction. I see these systems as a test of the empiricist hypothesis, and it is a failure.”

What about technological singularity?

The idea of technological singularity is that ordinary humans will someday be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence. The discussion has moved from the realm of science fiction to serious debate. But Gary believes there is “no magic moment of singularity”. This is because intelligence is not a single thing but a collection of different facets; in some of them machines are better than humans, and in others they are not.

“In raw computational power, machines exceed people. There’s no question that a computer is better than the average person at analysing positions on a chessboard. But the ability of an eight-year-old to watch a Pixar movie and understand what’s happening far exceeds any machine,” Gary said. “Current AI is illiterate. It can fake its way through, but it doesn’t understand what it reads. So the idea that all of those things will change on one day, and that on that magical day machines will be smarter than people, is a gross oversimplification.”

Is the Turing Test reliable?

The Turing test, proposed in 1950 by Alan Turing, a founding father of AI, is one of the classic measures of machine intelligence. It is played as a game between two humans and a computer: one human, the judge, converses with the other human and the machine, and the computer’s goal is to fool the judge into thinking it is human. But when it comes to measuring AI today, Gary says, “The Turing Test is not very good. Humans are gullible, and machines can be evasive. If you want to build a system that fools people, you don’t answer some of those questions.”

“Eugene Goostman was a system that won a small version of the contest for a few minutes by pretending it didn’t speak English well. It pretended to be a 13-year-old boy from Odessa whose English wasn’t perfect. It would respond with wisecracks to evade revealing its limitations, and to the untrained eye, it was fairly convincing. A professional could still recognise the limits. All that tells us is that human beings think that machines that can talk are intelligent, but that turned out to be untrue.

This is what’s happening with GPT-3. It sometimes says smart things, but there’s no long-term conception of what you’re talking about. It doesn’t remember answers from one minute to the next; there is no consistency, there’s no real intelligence. Humans aren’t great at making tests, and machines pick up on that and wind up passing the test. But it doesn’t mean that they have the comprehension we’re ultimately after.”

Facing criticism online and Gary’s beef with Yann LeCun

Marcus does not hold back when calling out the “celebrities” of the AI community, as seen in his debates with AI pioneers Yann LeCun in 2017 and Yoshua Bengio in 2019. In fact, in response to his 2018 critique of deep learning, LeCun said, ‘the number of valuable recommendations ever made by Gary Marcus is exactly zero.’ While he made this comment online, LeCun had agreed with Gary’s point about the need for innate machinery in front of the NYU audience.

LeCun kind of bullied me,

Gary said

“I said there were ten problems, and he said they were mostly wrong. He didn’t write a long critique; he just said I was mostly wrong, and nobody has ever shown they were incorrect. And it was like people following a political leader; everybody on Twitter jumped on me. Having a famous person tell you that you’re wrong doesn’t mean you’re wrong.”

“I wrote a paper saying deep learning is hitting a wall, and people started making cartoons about deep learning stepping over the wall. None of them seemed to have read the intellectual content – which is not that you can’t do anything with deep learning but that you’re going to have to use it in association with other systems because there are these particular weaknesses.” 

Large language models are the wrong technology for responsible AI

Responsible and ethical AI is one of the key concerns of the technology sector today. Some key instances of failure are GPT-3 recommending that a person commit suicide or Delphi stating that genocide would be okay if everybody were happy with it. “Large language models are not the right technology for responsible and ethical AI,” said Gary. “They are very good at capturing statistical associations, but they’re not good at being responsible and ethical. As long as most of the investment is in that, we have a problem.

There’s nothing wrong with having large quantities of data, and other things being equal, large quantities are better than small. But we need systems that can take explicit English representations and reason according to them. We need a technology where a system can propose an action, let’s say, reply to a user, and evaluate – might this cause harm? We have to have that, and we don’t. We’re not even close. No system can read a conversation. The best we have are content filters that look for hateful language. But they’re very naive in terms of how they evaluate hateful speech or misinformation.” 
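To make concrete what Marcus calls “very naive” filtering, here is a minimal, hypothetical sketch, not drawn from the interview or any real system, of a keyword-based content filter. The word list, function name and test phrases are all illustrative assumptions.

```python
# Hypothetical illustration of a naive keyword content filter: it flags
# surface words but has no model of meaning, context or harm.

BLOCKLIST = {"kill", "suicide", "genocide"}  # toy word list, purely illustrative

def naive_filter(reply: str) -> bool:
    """Return True if the reply should be blocked, based only on keywords."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return bool(words & BLOCKLIST)

# The filter misses harmful phrasings that avoid the listed words,
# and blocks harmless ones that happen to mention them.
print(naive_filter("You should end it all."))             # False: harm missed
print(naive_filter("Suicide prevention hotlines help."))  # True: blocked wrongly
```

The gap between what this kind of check can see and the “might this cause harm?” evaluation Marcus describes is precisely his point.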

While Gary was unsure of the direction in which AI development would proceed, he pointed to reliance on historical data and the inclusion of only AI researchers’ ethics as two of the key factors making AI less responsible. He also spoke about the hybrid approach to building AI models, which combines classical and deep learning systems. Within hybrid approaches, the neurosymbolic approach is the one we currently understand best: it combines neural networks with symbolic systems.

“You must be able to represent things abstractly and symbolically. I just don’t see how to get to AI without at least doing that,” he said. 
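As a rough illustration of the hybrid idea, and not Marcus’s own method or any specific system, the sketch below pairs a stand-in “neural” proposer with an explicit symbolic rule that can veto a proposed reply. Both functions and the rule itself are invented for the example.

```python
# A toy neurosymbolic loop, purely illustrative: a learned component drafts
# a reply, and a symbolic component checks it against an explicit rule
# before the system is allowed to act on it.

def neural_propose(prompt: str) -> str:
    """Stand-in for a learned model that drafts a reply from statistical patterns."""
    return "You should do it." if "should i" in prompt.lower() else "I am not sure."

def symbolic_check(prompt: str, reply: str) -> bool:
    """Explicit rule: never endorse an action when the prompt mentions self-harm."""
    mentions_self_harm = "hurt myself" in prompt.lower() or "suicide" in prompt.lower()
    endorses_action = reply.lower().startswith("you should")
    return not (mentions_self_harm and endorses_action)

def respond(prompt: str) -> str:
    draft = neural_propose(prompt)
    return draft if symbolic_check(prompt, draft) else "I can't advise that. Please seek help."

print(respond("Should I hurt myself?"))   # the symbolic rule vetoes the drafted reply
print(respond("Should I learn Python?"))  # the draft passes the check
```

The point of the sketch is the architecture, not the toy rule: the symbolic layer is stated explicitly and can be inspected and reasoned about, which is what Marcus argues pure large language models lack.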
