Angels & Demons of AI

In an exclusive interaction with AIM, Yann LeCun elaborated on his latest research and critiqued the chain of citations.

The AI research community has made significant strides over the last few years, even as much work remains. But, oftentimes, some contributions get buried, and Jürgen Schmidhuber believes his are among them. In July this year, he accused Yann LeCun of rehashing old ideas and presenting them without credit, saying, “We must stop crediting the wrong people for inventions made by others.”

In an exclusive interaction with Analytics India Magazine, Yann LeCun said: “I’ve known Jürgen since he was a student. He has an unusual idea about how credit should be attributed. It seems like he has the idea that if someone has the germ of an idea, they should get all the credit for what comes after. For most people, that’s not how things work.”

He believes that having an idea and experimenting with toy problems is not enough. “One can use these ideas at scale to make them work on real problems, which usually attracts attention. Then, one can deploy them in products. So, there is a whole chain of contributions,” added LeCun. 


“Sometimes, what’s hard is actually to instantiate those ideas into things that work. That’s where the difficult part starts very often. I can write ‘f’ of ‘x’ equals zero. Absolutely every theoretical statement in all of science can be reduced to this form. If you know what ‘f’ and ‘x’ mean, you might have a general idea, but then you need to contribute something concrete and operationalise this idea. He’s not the only one to have this attitude, but he’s a bit extreme,” quipped LeCun.




The drama unfolds 

Schmidhuber has started disputes over recognition before; this was not a solitary instance. Earlier, news of the 2019 Honda Prize being awarded to Geoffrey Hinton “for his pioneering research in the field of deep learning in artificial intelligence (AI)” might have been expected to prompt the machine learning community to toast the man. Instead, the gloves came off and an unexpected Internet dust-up ensued, the reason being Schmidhuber’s 6,300-word blog post.

At NIPS (now NeurIPS) in 2016, Schmidhuber interrupted a widely-attended tutorial on GANs (Generative Adversarial Networks) by Ian Goodfellow, claiming that he had done very similar work earlier which was overlooked. This caused outrage in the media, massive discussions on Reddit and Twitter, and some serious words exchanged between famous names. However, the community generally sided with Goodfellow: simply because Schmidhuber “had the idea” doesn’t mean he is owed credit for other people’s work decades later.

More to the story 

The tussle with LeCun was over the research paper ‘A Path Towards Autonomous Machine Intelligence,’ published earlier this year.

Talking about the research paper, LeCun said that it focuses on integrating an understanding of how the world works into a cognitive architecture: one that allows machines not just to recognise but also to reason, in a hierarchical fashion. “Every human and a lot of animals can plan complex tasks by decomposing them into simpler ones. However, that seems to be something machines do not know how to do,” explained LeCun.

Further, he said, planning ability was a significant topic of interest in classical AI in the 1970s and 80s, but the current brands of machine learning-based AI do very little planning most of the time, except in certain systems for games like Go or chess, where planning is essential. Even the latest systems, such as Meta’s CICERO, do quite a bit of planning, but it is not hierarchical planning.

Hierarchical planning is a problem-solving approach built on decomposition: a problem is broken down step by step into smaller ones until they can be solved directly. “Humans do this all the time; we just don’t realise it. Being able to plan requires the ability to predict what is going to happen as a consequence of your actions. But AI can’t predict all the details. So, learning how the world can be affected by actions you take is an essential component of intelligence,” concluded LeCun.
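To make the idea of decomposition concrete, here is a minimal sketch (not LeCun’s proposed architecture) of hierarchical planning as recursive task expansion: an abstract task is broken into subtasks until only primitive, directly executable actions remain. The task names and the decomposition table are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch of hierarchical planning via recursive decomposition.
# A hypothetical table maps each abstract task to its ordered subtasks;
# anything not in the table is treated as a primitive action.
DECOMPOSITIONS = {
    "make tea": ["boil water", "prepare cup", "steep tea"],
    "boil water": ["fill kettle", "heat kettle"],
    "prepare cup": ["get cup", "add tea bag"],
}

def plan(task):
    """Recursively expand a task into a flat list of primitive actions."""
    if task not in DECOMPOSITIONS:  # primitive action: nothing to expand
        return [task]
    steps = []
    for subtask in DECOMPOSITIONS[task]:
        steps.extend(plan(subtask))  # expand each subtask in order
    return steps

print(plan("make tea"))
# → ['fill kettle', 'heat kettle', 'get cup', 'add tea bag', 'steep tea']
```

Real hierarchical planners (e.g. HTN planners from classical AI) add preconditions, effects, and search over alternative decompositions; this sketch shows only the core recursive structure the paragraph describes.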

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
