Yann LeCun sparks a debate on AGI vs human-level AI

Yann LeCun claimed the term AGI should be retired and replaced with “human-level AI”.

Artificial Intelligence research took a serious turn in the mid-1950s. Herbert A Simon, who won the Nobel Prize in Economic Sciences in 1978, said that human beings, viewed as behaving systems, are quite simple; the apparent complexity of our behaviour over time is largely a reflection of the complexity of the environment in which we find ourselves.

His views later served as an inspiration for Arthur C Clarke’s fictional antagonist HAL 9000 from the Space Odyssey series. Clarke believed AGI could be achieved by 2001. Marvin Minsky, who invented the confocal scanning microscope in 1955, collaborated with Clarke to bring HAL 9000 to life in Kubrick’s 2001: A Space Odyssey. The immense potential of AGI is also explored in movies like Her, Blade Runner 2049, and Transcendence.

In the early 70s, researchers woke up to the difficulty of achieving AGI: funding dried up, precipitating a long AI winter. During the 80s, however, Japan’s Fifth Generation Computer Project rekindled interest in AGI, and industry and government started pumping money into AI research. Then confidence in AI plummeted in the late 80s, and the vision of the Fifth Generation Computer Project fell by the wayside.

AGI & human intelligence

In 2005, Ray Kurzweil used the term “narrow AI” to describe systems that show “intelligent” behaviour only in a specific context: if the context or specification changes, some degree of human reprogramming is required for the system to regain its intelligence. This sets narrow AI apart from inherently intelligent systems such as humans, which can self-adapt.

Yann LeCun’s recent LinkedIn post added fuel to this debate: he claimed the term AGI should be retired and replaced with “human-level AI”, since there is no such thing as AGI. In LeCun’s view, intelligence or understanding rests on forming an efficient representation of data with predictive power; any intelligent system will only ever understand a small part of its universe, he said.

Will Chambers, technical director at Blue Origin, said calling AGI “human-level AI” breeds ambiguity, since we do not yet fully understand how our brains function.

Earlier, Turing Award winners Yoshua Bengio and Yann LeCun claimed self-supervised learning can help create human-like AI. Supervised learning refers to training an AI model on a labelled dataset; self-supervised learning algorithms, rather than depending on annotations, derive their training signal from the data itself by exploiting relationships between the data’s parts, for example by predicting hidden parts of the input from the visible parts.
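To make the idea concrete, here is a minimal, hypothetical sketch of self-supervised learning in PyTorch: no labels are involved; the model is trained to reconstruct randomly masked features from the visible ones, so the data supervises itself. The synthetic dataset, network size, and masking rate are illustrative assumptions, not the setup Bengio or LeCun actually use.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Unlabelled "dataset": 1,000 vectors of 16 correlated features,
# built by repeating 4 latent factors with a little noise.
latent = torch.randn(1000, 4)
data = latent.repeat(1, 4) + 0.1 * torch.randn(1000, 16)

# Small reconstruction network -- size chosen for illustration only.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    mask = torch.rand_like(data) < 0.25      # hide ~25% of the features
    corrupted = data.masked_fill(mask, 0.0)  # model never sees masked values
    pred = model(corrupted)
    # Loss is computed only on the hidden positions:
    # the data itself supplies the "labels".
    loss = ((pred - data)[mask] ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final masked-reconstruction loss: {loss.item():.4f}")
```

Scaled up, this same mask-and-predict principle drives models such as BERT, which learns language representations by predicting masked words in text.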

LeCun’s implication is that self-supervised learning and insights from neurobiology will not be enough to achieve AGI, that is, a machine able to learn any task. This is because intelligence, even human intelligence, is very specialised.

It is difficult to place systems in a hierarchy of intelligence in which an arbitrary system’s intelligence can be quantified and compared with human-level intelligence. Many researchers have worked to provide universal intelligence measures, but the utility of these measures is still contentious.

The implicit assumption is that humans are inherently intelligent systems, and that the most pragmatic way to characterise general intelligence is by comparing it with human capabilities.

Nils Nilsson, a pioneering AI researcher, said real human-level AI is achieved when the tasks humans perform for pay can be automated. Rather than working towards automation by building a different system for each job, he argued for a general-purpose system to perform human jobs: one with minimal but extensible built-in capabilities that improve through learning. The Turing test takes a different view of emulating humans; Nilsson’s interest was not in whether an AI system can fool people into thinking it is human, but in whether it can do the important practical things most people do.

Why people are against AGI

Human beings are considered the most “developed” and “smartest” of the species inhabiting Earth. If AI can supersede humanity in general intelligence, such “superintelligent” machines could potentially control humans.

However, controlling a superintelligent machine, or infusing it with human values, is easier said than done. Many researchers believe a superintelligent machine would naturally resist attempts to shut it down or change its goals, a tendency known as instrumental convergence.

An AI system’s powerful “learning” capabilities could also lead it to evolve unpredictable behaviour when confronted with unanticipated external scenarios. It could botch an attempt to design a new generation of itself and create a successor AI more powerful than itself.

The only way a self-improving AI can be safe is by creating bug-free successor systems. But what if the machine predicts that humans will try to shut it off and uses its superintelligence to thwart such efforts, a scenario dubbed the “treacherous turn”?

Meanwhile, programming a superintelligence with human values is a difficult technical task. Yann LeCun, for his part, argues superintelligent machines will have no desire for self-preservation.

Concerns about superintelligence were further fuelled after Stephen Hawking, Bill Gates, and Elon Musk called it a credible threat to humanity.

In 2000, Bill Joy, Sun Microsystems co-founder and computer scientist, wrote the essay ‘Why the Future Doesn’t Need Us’, which called superintelligent robots a high-tech danger to human survival, alongside nanotechnology and engineered bioplagues.

“The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question,” Samuel Butler wrote in his 1863 essay ‘Darwin Among the Machines’.

In 1951, Alan Turing, in ‘Intelligent Machinery, A Heretical Theory’, proposed that intelligent machines would likely “take control” of the world as they become more intelligent than humans.

“Almost any technology has the potential to cause harm in the wrong hands, but with AI and robotics, we have the new problem that the wrong hands might belong to the technology itself. Countless science fiction stories have warned about robots or robot–human cyborgs running amok,” wrote Stuart J. Russell and Peter Norvig in their textbook ‘Artificial Intelligence: A Modern Approach’.

Akashdeep Arul

Akashdeep Arul is a technology journalist who seeks to analyze the advancements and developments in technology that affect our everyday lives. His articles primarily focus upon the business, cultural, social and entertainment side of the technology sector.