Yann LeCun, VP & Chief AI Scientist at Meta, recently published a paper titled ‘A Path Towards Autonomous Machine Intelligence’. However, the paper sparked a controversy when AI pioneer Jurgen Schmidhuber took to Twitter to claim that it does not cite essential work done between 1990 and 2015.
LeCun is well known in the AI community, especially for winning the 2018 Turing Award with Yoshua Bengio and Geoffrey Hinton for their work on deep learning.
“Much of the closely related work not acknowledged was done in my lab, and I naturally wish that it be acknowledged and recognised. I would like to start this by acknowledging that I am not without a conflict of interest here; my seeking to correct the record will naturally seem self-interested,” Jurgen Schmidhuber said.
Unpacking the paper
The paper describes a pathway towards developing intelligent machines that learn more like animals and humans, that can reason and plan, and whose behaviour is driven by intrinsic objectives rather than by hard-wired programs, external supervision, or external rewards.
The paper outlines an architecture and training paradigms that combine concepts such as a configurable predictive world model, behaviour driven by intrinsic motivation, and hierarchical joint embedding architectures trained with self-supervised learning.
In his blog post, Schmidhuber said much of the paper reads like a déjà vu of his papers since 1990, without citation. “Years ago, we had already published most of what LeCun calls his ‘main original contributions’.”
In his paper, LeCun points out three major challenges that AI research must resolve:
1. How can machines learn to represent the world, learn to predict, and learn to act largely by observation?
2. How can machines reason and plan in ways that are compatible with gradient-based learning?
3. How can machines learn to represent percepts and action plans in a hierarchical manner, at multiple levels of abstraction and multiple time scales?
Schmidhuber said all three questions posed by LeCun in his paper were already answered in papers published in 1990, 1991, 1997, and 2015.
He claimed that some of the concepts mentioned in LeCun’s paper were, in fact, published by him long ago.
Further, LeCun spoke about the Joint Embedding Predictive Architecture (JEPA) in his paper. JEPA can be seen as a combination of the Joint Embedding Architecture and the Latent-Variable Generative Architecture. LeCun claimed JEPA will learn abstract representations that make the world predictable.
“That’s what we published in very general form for RL systems in 1997. See also earlier work on much less general supervised systems, e.g., ‘Discovering Predictable Classifications’ (1992),” Schmidhuber said.
Schmidhuber challenged LeCun’s paper on at least 10 different points.
For its part, LeCun’s paper states: “The present piece does not claim priority for any of them but presents a proposal for how to assemble them into a consistent whole.”
Jurgen Schmidhuber’s most notable contribution is Long Short-Term Memory (LSTM), used for a range of tasks from speech recognition to machine translation. This was not the first time Schmidhuber had levelled allegations against LeCun.
In 2018, when LeCun, Yoshua Bengio and Geoffrey Hinton were awarded the Turing Award, Schmidhuber accused the trio of circular citations.
“The trio might be backed by the best PR machines of the Western world (Google hired Hinton; Facebook hired LeCun). However, historic scientific facts will be stronger than any PR,” Schmidhuber said.
“The inventor of an important method should get credit for inventing it. If you ‘re-invent’ something that was already known, and only later become aware of this, you must at least make it clear later,” he added.