The fear of an impending robot apocalypse is on the rise. Renowned figures like Stephen Hawking have claimed that AI could wipe out the human race, a scenario that science fiction author Isaac Asimov anticipated decades ago.
Hawking is not the only one alarmed by the prospect of an AI takeover. Some of the world’s most prominent scientists, researchers, engineers, and authors share the same fears and visions of a future world dominated by robots. Let’s glance through some of the most striking quotes from famous personalities, reflecting on the concerns surrounding the technology.
Bill Gates: The Microsoft co-founder believes machines will initially take over many jobs without being super intelligent, which could be a positive development if managed well. His worry is what comes a few decades after that.
Quote: “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern”
Elon Musk: The Tesla CEO and OpenAI co-founder strongly recommends that we approach AI with regulatory oversight, at both the national and international level. This will help ensure that we don’t end up doing something reckless with the technology.
Quote: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that”
Gary Marcus: The cognitive science professor expresses his fears surrounding a “technological singularity” or “intelligence explosion.” Marcus warns that humans might have to struggle against AI for ownership of resources in the near future.
Quote: “Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed”
James Barrat: The renowned author of the book ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’ expresses his fears about AI and related advances in the landscape, drawing on his conversations with people highly placed in the AI field.
Quote: “I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan”
Jaron Lanier: The virtual reality pioneer frames his concern about AI development through the example of the human brain. Most AI developers try to replicate the human brain. What perturbs Lanier is that, at the end of the day, we’ll have a huge mash-up of many brains working along the lines of a real one, and such a complex intelligence could be tough to handle or control.
Quote: “We’re still pretending that we’re inventing a brain when all we’ve come up with is a giant mash-up of real brains. We don’t yet understand how brains work, so we can’t build one”
Max Tegmark: The Swedish-American cosmologist fears that advances in the field of Artificial Intelligence could lead to technology outsmarting humans across several areas. Tegmark also stresses how AI could out-manipulate human leaders, out-invent human researchers, or even develop weapons we can barely comprehend.
Quote: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand”
Nick Bilton: The former New York Times tech columnist also fears the social implications of AI. Bilton believes that advances in AI could escalate quickly, leading to scarier, cataclysmic consequences.
Quote: “Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease”
Nick Bostrom: The philosopher has a different concern surrounding AI. Bostrom questions whether, for all the advances in AI, a superintelligence would share the values associated with wisdom and intellectual development in humans.
Quote: “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans — scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth”
Stephen Hawking: The world-renowned theoretical physicist also fears the advent of AI and the impact it might have on humanity. Hawking stressed that humans are limited by slow biological evolution and would thus be superseded in a struggle between AI and humanity. He believes that in such a tussle, AI would win hands down.
Quote: “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate”
Vernor Vinge: A mathematician and science fiction writer, Vinge coined the term “singularity” to describe the inflection point when machines outsmart humans. Vinge perceives the singularity as inevitable, even if international rules emerge to control the development of AI, and sees this as a major concern for the future of humanity.
Quote: “The competitive advantage – economic, military, even artistic – of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first”