AGI may be seen as the ultimate goal by some in the AI community, but Michael Irwin Jordan, professor at UC Berkeley, believes it to be a “lazy person’s aspiration”.
In an exclusive interview with AIM, Jordan said that instead of creating a Frankenstein-like AGI, we should focus on developing systems that bring music and choice into people’s homes. Jordan argues that AGI is neither necessary nor sufficient for achieving such tasks.
This statement is a strong rebuke to AI figures like OpenAI’s Sam Altman and Tesla’s Elon Musk, who champion AGI and believe that machines can surpass human intelligence, and who, in Jordan’s view, naively assume their current technology is the solution to humanity’s problems or the key to achieving AGI.
On the contrary, Jordan is of the view that AGI is not the ultimate goal and that we should prioritise building impactful systems that benefit humanity. By shifting our focus away from the hype surrounding AGI, we can create technology that truly improves people’s lives.
“Intelligent computers can help us network together the traffic control of aeroplanes. This is done because humans didn’t evolve to take pieces of metal flying through the air and make them safe. So why not let the computer help us with that task?” he pondered. He added that the community is not yet working at a planetary scale, connecting people in healthy ways so that they do not hurt each other, a goal he finds far more interesting than AGI.
Jordan thinks that happiness, democracy and welfare for humans are not a given. “If we’re not focusing on ensuring that everybody has opportunities, some happiness and resources, then we’re missing the boat, and I do not believe that simply creating AGI somehow solves those problems,” he said.
Drawing a comparison between AI and electrical engineering, a field that had a clearly positive impact on human beings, Jordan said electricity had people thinking: we have these electrons that could create fireworks, but we could also use them to warm houses, light rooms and cook food. That, he said, is how humans operate. Our skill set gets leveraged by new tools and by the ability to do experiments and build devices, instead of by asking some AGI agent to solve those problems for us.
Jordan has mentored over 150 students at MIT and UC Berkeley
Michael Jordan is a renowned computer scientist and statistician who has made significant contributions to the fields of machine learning and artificial intelligence. Andrew Ng and David Blei, both prominent figures in machine learning, have cited Jordan as a mentor and influence on their work. Zoubin Ghahramani and Yoshua Bengio are also among his notable students.
“It’s really great to be an academic mentor. You get to be with people for five years while they learn and then take maybe a year to solve challenging problems. So the overall tree is very large. It’s been a wonderful part of my life to be part of that process,” Jordan gladly remarked.
Jordan’s first love was statistics and analysing data in a believable way.
“I saw how you could automate statistics and think about computers not just as things you program, but as things that can make models of the world, predictions and help us,” he said.
Jordan entered the artificial intelligence realm around the era of search engines, when the internet had just started to become powerful. To him, it was a democratic medium supported by algorithms that humans could use to spread knowledge worldwide.
Currently, his students at UC Berkeley are working on topics like uncertainty quantification of predictions. The 67-year-old professor is also very excited about work on learning-based economic mechanisms, including an economic model for federated learning, where students are trying to bring data from many sources together. The vision is to do so in a manner where the agent at the end of the link is incentivised to be part of the system and can walk away if the incentives are not strong enough.
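To give a flavour of what “uncertainty quantification of predictions” means in practice, here is a minimal sketch of split conformal prediction, one popular technique in that area. The toy data and model below are illustrative assumptions, not drawn from the actual work of Jordan’s group:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise (purely illustrative).
x = rng.uniform(0, 10, 200)
y = 2 * x + rng.normal(0, 1, 200)

# Split the data: fit a least-squares line on one half,
# calibrate the uncertainty on the other half.
x_fit, y_fit = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

slope, intercept = np.polyfit(x_fit, y_fit, 1)

def predict(t):
    return slope * t + intercept

# Calibration residuals measure how wrong the model tends to be.
scores = np.abs(y_cal - predict(x_cal))

# For ~90% coverage, take a conservative empirical quantile.
n = len(scores)
q = np.quantile(scores, np.ceil(0.9 * (n + 1)) / n, method="higher")

# The prediction interval for a new point is the estimate +/- q.
x_new = 5.0
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"prediction interval at x={x_new}: [{lo:.2f}, {hi:.2f}]")
```

The appeal of the method is that the interval’s coverage guarantee holds without assuming the fitted model is correct, only that the calibration data is exchangeable with new data.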
Elaborating on the increasing use of ChatGPT in businesses, he said people need to realise that ChatGPT is an analysis of existing data. It is a huge network: when you ping it, you probe a part of the network that, at the end of the day, was created by humans. It is able to bring all of that together in interesting but surprising ways.
“Language models just being programmed to try to predict the next word is true, but it’s not the dunk some people think it is,” tweeted Gary Marcus, a leading voice in AI. He is not the only one who agrees with Jordan on large language models; AI stalwarts Yoshua Bengio and Yann LeCun have expressed similar views in their exclusive conversations with AIM.
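The “predict the next word” mechanism Jordan and Marcus refer to can be illustrated with a deliberately tiny bigram model. This sketch is a toy stand-in, nothing like the transformer networks behind ChatGPT, but it shows the core loop of generating text purely from statistics of past text:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "existing data created by humans".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count bigrams: how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Greedily pick the most frequent continuation seen in the data."""
    return follows[word].most_common(1)[0][0]

# Generate a few words starting from "the".
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # → the cat sat on the
```

The model can only recombine what it has seen, which is exactly Jordan’s point: it cannot reach into the physical or social world, only into its training text.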
“I read that someone kept probing it and it went a little crazy. It is generating text based on past texts and cannot reach into the real world. Not just the physical world, but also the social world. It’s a pale shadow of that,” Jordan opined.
Furthermore, he said the wave of hysteria and hype around GPT-4 also alarms him a bit. He believes that the focus of AI should be on the cooperative links between humans and machines, using technology to augment human capabilities rather than to mimic isolated human intelligence. “It’d be wise to think about what particularly will help humans the most. If you’re going to put people out of work, don’t do it so fast that it breaks the economic system. Do it in a staged way so people have time to realise what’s happening and make plans around that,” he suggested.
In 2018, Jordan wrote a Medium article, ‘Artificial Intelligence — The Revolution Hasn’t Happened Yet’, and believes it still holds true today. There are aspects of pattern recognition and generative AI that have proceeded faster than he expected. But noting their limitations, he said, “They are not systems that I would trust with making life and death decisions. They cannot reason yet.”
There are difficult aspects of human interaction, such as having a deep appreciation of semantic distinctions and being able to reason with them. In some ways, machines today are surprisingly good at mimicking, but mimicking is not the same thing as really being able to do it, Jordan believes.
There’s a big difference between mimicry and generating solutions that involve reasoning, counterfactuals, interacting with others, and using extensive knowledge based on experience. Given all that, Jordan does not think the goal should be to imitate humans. Explaining the difference, he said, “We have thousands of aeroplanes that fly around. They don’t hit each other because it’s all coordinated. The right level to be thinking about is not the individual entity that’s supposed to be super smart but to make networks smarter and make them augment human capabilities, not replace.”
It’s more about trade and making lives a little bit better. But how do we make sure that those trade-offs are a big part of the system?
An ethical system is partly one that thinks about economics, a field that talks about interactions between humans, he said. “If you build a system that doesn’t work and people are depending on it, it’s unethical. Ethics is thinking about how to build a good system, one that brings real value to human beings. Then part of it is more philosophical and legal,” he added.