Have you ever questioned the very fabric of your reality? Do you ever wonder if what you perceive as the real world is nothing more than a computer-generated simulation controlled by a superior artificial intelligence? The idea was popularised by Nick Bostrom as the simulation hypothesis, which suggests that our very existence is no different from that of a character in a video game.
Bostrom is an acclaimed thinker on the safety concerns associated with our march towards increasingly powerful and general forms of AI. The polymath believes AI is “the single most important and daunting challenge that humanity has ever faced”. Currently, he is a professor at the University of Oxford and director of the Future of Humanity Institute. Over the years, his work has profoundly influenced minds such as Stephen Hawking, Bill Gates, and Elon Musk.
In an exclusive interview with Analytics India Magazine, Bostrom expressed his concerns about artificial intelligence. “If we manage to do things right, the upsides are fantastic,” said the Swedish philosopher, highlighting three broad concerns in AI: value misalignment, governance, and the moral status of digital minds.
What Keeps Bostrom Awake At Night?
Bostrom’s first concern, he told AIM, is the alignment problem: the challenge that arises because machines do not share the same values as humans. “How do we ensure that highly cognitively capable systems — and eventually superintelligent AIs — do what their designers intend for them to do?” he pondered. Bostrom delves deeper into this unsolved technical problem in his book ‘Superintelligence’ to draw more attention to the subject. Perhaps the most infamous recent instance of misaligned AI is Meta’s BlenderBot, which produced racist remarks and professed a dislike of Mark Zuckerberg.
The professor’s second concern is the relatively new problem of governance. Even if AIs remain under complete human control, we must ensure that the technology is used predominantly for positive ends and that its benefits are widely shared. The problematic deepfakes that have surfaced on the internet show what happens when that fails.
Lastly, there is the problem of ensuring that if we create digital minds with moral status, those minds are themselves treated with due ethical consideration. Less than a year ago, reports surfaced of users acting abusively towards the AI chatbot Replika, one of many similar cases.
“In addition to protecting human interests, we must also ensure that digital minds don’t suffer and that their interests are taken properly into account,” Bostrom suggested.
The Solution
Most AI companies today are making efforts to address these concerns. The first that comes to mind in the context of alignment research is OpenAI, which trains most of its models to follow human intent and human values. “At the time [2014], this problem was almost completely neglected, but it is now becoming increasingly recognized by more mainstream AI researchers,” noted Bostrom. Google, too, has published an elaborate 34-page document on how the tech giant is tackling the issue of AI governance.
“At the meta-level, I think it would be desirable to try to work through these moral questions in a more thoughtful and curiosity-driven way than is often done,” he said, suggesting that these problems be examined on a case-by-case basis.
He also thinks many people are too quick to pick a side, and then seek to demonise those who take the other side. “We’d do better if we try harder to listen, reflect, and try to apply the strictest criticism to our own views,” he added.
Explaining why he works to make complex and abstract concepts accessible, Bostrom said, “Well, the issues at stake concern us all. That is one reason to try to make them as widely accessible as possible. Another is that no one group of experts has all the relevant knowledge and perspectives. A third is that sometimes, when one tries to explain things clearly in non-technical language, it can help one understand things better oneself.”
This chimes with several AI researchers who have been advocating a collaborative rather than a closed-door approach in the field. “Development by smart and well-resourced people behind closed doors can make great things, but development in the open by a huge community of people is just a more effective and equitable mode of development,” Colin Raffel, a faculty researcher at Hugging Face and professor at UNC Chapel Hill, had earlier told AIM in an exclusive interview.
We’re All Trumans
The idea that our universe, including ourselves and our innermost thoughts, is a computer simulation has permeated culture high and low. In an influential 2003 essay, Bostrom proposed that “technologically mature” civilizations could use a tiny fraction of their computational power to run detailed simulations of their own histories. Elon Musk, co-founder and CEO of Tesla, once echoed this idea, declaring that there was only a one-in-a-billion chance that we live in “base reality”.
Recalling the essay, Bostrom said, “So conditional on there being such civilizations that have an interest in doing this, we should think we are likely to be among the typical simulated minds, rather than the rare non-simulated minds, given that from the inside it would not be possible to tell the difference.”
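For readers who want the formal version, the 2003 essay reduces this reasoning to a single fraction. In a sketch following the paper’s notation, $f_P$ is the fraction of human-level civilizations that survive to reach a technologically mature “posthuman” stage, $\bar{N}$ is the average number of ancestor simulations such a civilization runs, and $\bar{H}$ is the average number of individuals who lived before that stage:

$$f_{\text{sim}} \;=\; \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}} \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}$$

Unless $f_P \bar{N}$ is close to zero, the simulated fraction $f_{\text{sim}}$ approaches one — which is why, conditional on such civilizations existing and running simulations, a typical observer should expect to be among the simulated minds.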
Currently, Bostrom is working on an interesting book project, about which he did not reveal much. He did say he is trying to understand what a world order might look like in which humans, AIs, and all kinds of digital minds live together harmoniously. This might well be a teaser for his next book!