
Superintelligence Timeline Now Shrinks to 20 Years

Elon Musk and Jensen Huang think it is happening somewhere between one and five years from now.


One of the godfathers of AI, Geoffrey Hinton, recently revised his estimate of when superintelligence could come into being.

“I think it’s fairly clear that maybe in the next 20 years, I’d say with a probability of 0.5, it will get smarter than us and probably in the next hundred years, it will be much smarter than us,” Hinton said during the Romanes Lecture he gave recently on whether digital intelligence will replace biological intelligence.

Hinton had famously revised his estimate early last year, coming to believe that AI would become smarter than humans, shortly before he resigned from Google.

Hinton initially moved his conservative estimate of 30-50 years to a more dire 5-20 years, but it seems this has shifted slightly over the months due to accelerating AI advancements.

Hinton isn’t the only industry stalwart to predict when superintelligence will arrive.

From Elon Musk’s optimistic, yet unlikely prediction of a year to Yann LeCun’s vague estimate of “years, if not decades”, to Jensen Huang’s straightforward and popular estimation of five years, it seems that AI superseding human intelligence will forever be around the corner.

No, AI is not going to take over just yet

Both Hinton and fellow AI godfather LeCun give ample reasons why superintelligence is not likely anytime soon. Hinton acknowledged that these things are hard to predict since the technology is new.

Ever the optimist, Hinton said, “My conclusion, which I don’t really like, is that digital computation requires a lot of energy, and so it would never evolve. We have to evolve using the quirks of the hardware to be very low-energy.

“But once you’ve got it, it’s very easy for agents to share – GPT-4 has thousands of times more knowledge in about 2% of the weights, so that’s quite depressing. Biological computation is great for evolving because it requires very little energy, but my conclusion is that digital computation is just better.”

However, while Hinton is understandably cautious, LeCun has been much more dismissive of the idea that AI regulation is needed just in case the human race gets taken over.

LeCun has repeatedly stated that the move towards superintelligence is decades away, with no specific breakthrough defining when we will definitively see a shift. And this might be the reason that superintelligence will always be at least one step away from where we think it is.

Without being able to pinpoint an exact moment when superintelligence comes into existence, the ability to actually predict it becomes moot.

Maybe predictions aren’t really the way to go about it, considering every new breakthrough seems to fast-forward the timeline, only to ultimately push it back thanks to the new challenges it poses.

How far are we, really? 

The general consensus is that we are nowhere near where we need to be for superintelligence to come into being. The leap from AI as it is now to where it needs to be requires several breakthroughs before it can rival the intelligence of a human mind.

The godfathers and other relatives of AI seem to agree on one thing: the level of computational power needed for a potential superintelligence is extraordinary, so that could be one signpost to watch for.

Hinton said the reason for this was that AI might already be almost on par with the human brain, at least in the way it works, if not yet in its intelligence.

“I thought making our models more like the brain would make them better. I thought the brain was a whole lot better than the AI we had… I suddenly came to believe that maybe the digital models we’ve got now are already very close to or as good as brains, and will get to be much better than brains,” Hinton said.

He explained that the ability to run the same neural net on different computers or pieces of hardware far outweighs the learning ability of biological computation, with digital computation having the scope to become smarter much faster.

However, as mentioned before, biological computation needs much less energy.

Unless this gap is closed, superintelligence isn’t really within grasp. However, while many believe that quantum computation is the means to this end, LeCun seems to believe that quantum computation is just as much of a pipe dream as superintelligence.

With China recently announcing the development of the Taichi chiplet (advertised as capable of running an AGI model!), it’s only a matter of time before we see whether this theory holds up.


Donna Eva

Donna is a technology journalist at AIM, hoping to explore AI and its implications in local communities, as well as its intersections with the space, defence, education and civil sectors.