Some twenty years ago, AI start-up Webmind introduced the idea of a "digital baby brain": a digital mind that would manifest the higher-level structures and dynamics of a human brain. Though physicist Mark Gubrud first used the term AGI in 1997, Webmind founder Ben Goertzel and DeepMind cofounder Shane Legg were instrumental in popularising it.
Two decades later, we have AI tools like GPT-3 producing human-like text and DALL-E creating incredible images from text inputs. Yet the AGI holy grail is still out of reach. So the million-dollar question is: are we on the right track?
Story so far
AGI is the north star of companies like OpenAI, DeepMind and AI2. While OpenAI’s mission is to be the first to build a machine with human-like reasoning abilities, DeepMind’s motto is to “solve intelligence.”
DeepMind’s AlphaGo is one of the biggest success stories in AI. In a 2016 challenge held over six days, the computer programme defeated the world’s greatest Go player, Lee Sedol. DeepMind’s latest model, Gato, is a multi-modal, multi-task, multi-embodiment generalist agent. Google’s 2021 model, GLaM, can perform tasks like open-domain question answering, common-sense reasoning, in-context reading comprehension, the SuperGLUE tasks and natural language inference.
OpenAI’s DALL-E blew minds just a few months ago with imaginative renderings based on text inputs. Yet all these achievements pale in comparison with the intelligence of a human child.
Machines have yet to crack sensory perception, common-sense reasoning, motor skills, problem-solving or human-level creativity.
What is AGI?
Part of the problem is that there is no single definition of AGI. Researchers can hardly agree on what it is or what techniques will get us there. In 1965, computer scientist I.J. Good said: “The first ultra-intelligent machine is the last invention that man need ever make.” Oxford philosopher Nick Bostrom echoed the same idea in his groundbreaking work Superintelligence. “If researchers are able to develop Strong AI, the machine would require an intelligence equal to humans. It would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future,” said IBM. Many researchers believe such recursive self-improvement is the path to AGI.
“There’s tons of progress in AI, but that does not imply there’s any progress in AGI,” said Andrew Ng.
In pursuit of AGI, researchers are building multi-task, generalist AI. Take DeepMind’s Gato, for example: the model can play Atari, caption images, chat and manipulate a real robot arm.
“Current AI is illiterate,” said NYU professor Gary Marcus. “It can fake its way through, but it doesn’t understand what it reads. So the idea that all of those things will change on one day and on that magical day, machines will be smarter than people is a gross oversimplification.”
In a recent Facebook post, Yann LeCun said, “We still don’t have a learning paradigm that allows machines to learn how the world works like humans and many non-human babies do.” In other words, the road to AGI is rough.
Nando de Freitas, an AI scientist at DeepMind, tweeted “the game is over” upon Gato’s release, arguing that scale and safety are now the only remaining challenges on the way to AGI. But not all researchers agree. Gary Marcus, for example, pointed out that Gato was explicitly trained on every task it can perform; faced with a new challenge, it would not be able to analyse and solve the problem logically. He called such feats parlour tricks, and in the past has called them illusions to fool humans. “You give them all the data in the world, and they are still not deriving the notion that language is about semantics. They’re doing an illusion,” he said.
Oliver Lemon at Heriot-Watt University in Edinburgh, UK, said the bold claims made about AI achievements are untrue: while these models can do impressive things, the showcased examples are ‘cherry-picked’. The same can be said for OpenAI’s DALL-E, he added.
Large language models
Large language models are complex neural nets trained on a huge text corpus. For instance, GPT-3 was trained on 700 gigabytes of data. Google, Meta, DeepMind, and AI2 have their own language models.
Undoubtedly, GPT-3 was a game-changer. But how much closer can LLMs take us to AGI? Marcus, a nativist and an AGI sceptic, argues for innate structure over pure machine learning; he believes not all knowledge originates from experience. “Large networks don’t have built-in representations of time,” said Marcus. “Fundamentally, language is about relating sentences that you hear, and systems like GPT-3 never do that.”
If LLMs lack common-sense knowledge about the world, how can humans rely on them? Melanie Mitchell, a scientist at the Santa Fe Institute, wrote in a column: “The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding.”
Further, since these models are trained on tons of historical data, they show signs of bias, racism, sexism and discrimination. “We’d like machines to actually be able to reason about these things and even tell us your moral values aren’t consistent,” Marcus said.
Where is AGI?
A few months ago, Elon Musk told the New York Times that superhuman AI is less than five years away. Jerome Pesenti, VP of AI at Meta, countered: “Elon Musk has no idea what he is talking about. There is no such thing as AGI, and we are nowhere near matching human intelligence.”
Musk’s classic riposte was: “Facebook sucks.”
“Let’s cut out the AGI nonsense and spend more time on the urgent problems,” said Andrew Ng. AI is making huge strides in different walks of life: AlphaFold predicts the structure of proteins; self-driving cars, voice assistants, and robots are automating many human tasks. But it’s too early to conclusively say machines have become intelligent.