The battle lines in AI research have been drawn clearly. One faction believes AI will end humanity as we know it, led by notable researcher and AI doomsayer Eliezer Yudkowsky. Another is made up of the newly reformed, critical of the direction in which AI progress is headed, like Geoffrey Hinton, the Godfather of Deep Learning, who resigned from Google a couple of days back. But what we do know, without a doubt, is that AI, even in its current half-baked state, is capable of controlling us.
Can LLMs be controlled?
Which brings us to the question: can something that’s not quite as smart as us end up controlling us? According to Hinton, this happens more often than we fully realise. Political leaders, the managers we report to, the gurus we pray to – not to mention the cats who have us running circles around them – are not necessarily smarter than us.

Meta AI’s Chief Scientist Yann LeCun sees no problem with this. Just yesterday, LeCun tweeted, “We can design AI systems to be both super-intelligent *and* submissive to humans. I always wonder why people just assume that intelligent entities will necessarily want to dominate. That’s just plain false, even within the human species.”

LeCun’s argument is that for machines to take control, they would first have to “want to take control”, and that our instant assumption that they will inevitably dominate humans is drawn largely from science fiction.
LeCun isn’t going against the grain here. Pedro Domingos, the AI researcher who invented Markov logic networks, tweeted along similar lines: “You’re already being manipulated every day by people who aren’t even as smart as you, but somehow you’re still OK. So why the big worry about AI in particular?”
Domingos and LeCun both rest easy on the logic that LLMs, unlike humans, do not have “agency”. More than anything, LeCun seems intent on putting a stop to AI fear-mongering, repeating that superhuman AI systems are still some distance away. “Gods and superhuman AI systems have a few things in common: They are invented by people. People fear they may run the world. People fight about what it all means. They don’t actually exist,” he tweeted.
I keep warning that once GPT's hacking skills improve, all computers will become unusable.
"Can't a good antivirus AI protect us?"
No. We don't know how to control any superintelligent AI. @PaulFChristiano, a top AI researcher, thinks our "good" AI would just turn against us!
— Liron Shapira (@liron) April 29, 2023
But none of this refutes the fact that modern AI models are built in a way that makes it hard to tell what, if anything, they intend. Deep neural networks – the foundation of most modern machine learning – absorb and process huge amounts of data, but their internal workings are a black box, pretty much invisible even to their makers.
How to control an AI system
Nick Bostrom’s ‘Superintelligence’ discusses mechanisms for solving AI’s control problem at length. Bostrom argued that containing AI in order to control it might mean eventually forgoing its benefits, and went on to show how even well-intentioned ways of using AI could easily backfire.

Say a superintelligence were given the task of ‘maximising happiness in the world’: it might decide the most efficient way to do this is to destroy all life on earth and run ever-faster computerised simulations of happy thoughts. Bostrom theorised that even restricting a superintelligence to very limited communication offers no full guarantee of safety.
A study in the Journal of Artificial Intelligence Research (JAIR), titled ‘Superintelligence cannot be contained: Lessons from Computability Theory’ and co-authored by Google engineer Lorenzo Coviello and University of Melbourne professor Andres Abeliuk among others, stated explicitly that containment of AI “in principle, is impossible, due to fundamental limits inherent to computing itself.”
And if LLMs seem too limited to warrant these fears, it could be argued that AI is already improving itself. Last year, a paper titled ‘Self-Programming Artificial Intelligence using Code-generating Language Models’ showed how researchers could build a model capable of autonomously editing its own source code to improve itself. ChatGPT, too, can not only fix bugs in code but also explain why it made the fix.
ChatGPT could be a good debugging companion; it not only explains the bug but fixes it and explain the fix 🤯
— Amjad Masad (@amasad) November 30, 2022
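For readers who want a feel for what “self-programming” means in practice, here is a minimal, purely illustrative sketch in Python. It is not taken from the paper: propose_patch is a hypothetical stand-in for a call to a code-generating model, and the loop only accepts a rewrite if it still returns the correct answer and runs faster than the original.

```python
# Illustrative sketch only (not from the paper): a toy "self-programming" loop.
# propose_patch() is a hypothetical stand-in for a call to a code-generating
# language model; here it simply returns one hard-coded candidate rewrite.
import inspect
import timeit


def slow_sum(n: int) -> int:
    """Current implementation that the loop will try to improve."""
    total = 0
    for i in range(n + 1):
        total += i
    return total


def propose_patch(source: str) -> str:
    # In a real system this would send `source` to an LLM and return its rewrite.
    return "def slow_sum(n: int) -> int:\n    return n * (n + 1) // 2\n"


def self_improve(func, test_input: int = 10_000):
    """Keep the model's rewrite only if it stays correct and runs faster."""
    candidate_src = propose_patch(inspect.getsource(func))
    namespace: dict = {}
    exec(candidate_src, namespace)                   # load the proposed rewrite
    candidate = namespace[func.__name__]

    if candidate(test_input) != func(test_input):    # correctness check
        return func
    old_t = timeit.timeit(lambda: func(test_input), number=50)
    new_t = timeit.timeit(lambda: candidate(test_input), number=50)
    return candidate if new_t < old_t else func


if __name__ == "__main__":
    improved = self_improve(slow_sum)
    print(improved(10))   # 55, computed by whichever version won
```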
Maybe we should all turn to Hinton himself – the man practically responsible for the biggest leaps in deep learning – who recently tweeted, “If we did make something MUCH smarter than us, what is your plan for making sure it doesn’t manipulate us into giving it control?”
Hinton is right: there is no plan in place. And no one has been more open about how clueless they are than OpenAI chief Sam Altman. The maker of the GPT models recently stated that the consequences of AI were a toss-up and could either be “terrifying or awesome.” What if things went south? “It’s lights-out for all of us,” he responded.