Geoffrey Hinton Is The Bad Dad of AI

Visionary Geoffrey Hinton recently left Google to speak out about the dangers of AI

The touted granddad of deep learning, Geoffrey Hinton, recently quit Google so he could talk more freely about the threats posed by AI. He said he would also be responding to requests for help from Bernie Sanders, Elon Musk and the White House. But a few years ago, when Google’s ethics team raised alarms about the company’s unethical practices, the AI Prometheus turned a blind eye.

In the past, Hinton has expressed concern over the potential dangers of AI, likening it to Robert Oppenheimer’s work on the Manhattan Project, which developed the world’s first nuclear bombs during World War II. The 75-year-old polymath believes that the pursuit of profit in AI development could lead to a world where AI-generated content outnumbers that produced by humans, thereby endangering our very survival.

The Oppenheimer Fallacy

In the past, when asked about the potential harm of AI, Hinton paraphrased Oppenheimer, saying that when one encounters something technically sweet, one goes ahead and pursues it.

However, Hinton now expresses regret over the consequences of his work. He acknowledges that the once far-fetched idea of machines surpassing human intelligence is now a realistic possibility. Hinton, who previously believed that such advancements were still “30-50 years away”, cites recent progress in large language models, particularly OpenAI’s GPT-4, as evidence of how quickly machines are advancing.

He said, “Look at how it was five years ago and how it is now. Take the difference and propagate forwards. That’s scary.”

Interestingly, the nuclear weapons analogy also resurfaced in the Stanford Artificial Intelligence Index Report 2023, which was released last month. The breadth of AI’s applications is unlike that of any other field. The report notes that 36% of NLP (natural language processing) researchers polled think that artificial general intelligence (AGI) could lead to a catastrophic event on par with a nuclear disaster.

While the analogy provides a helpful point of reference, it has its limitations. AI’s impact spans domains as varied as social media and nuclear weapons. Hence, analogies like the Oppenheimer comparison can be illuminating yet incomplete when describing the scope of AI.

Hinton on the Fence

Addressing the NYT article by Cade Metz that suggested Hinton left Google in order to criticize the company, he clarified that he left so he could speak out about the dangers of AI without being constrained by any potential impact on the company. He further noted that Google has acted responsibly in its pursuit of AI.

But we do not agree with him.  

Ethically, Google has been in a state of flux since 2020, when the tech giant ousted the leaders of its AI ethics team. Prominent Black female scientist Timnit Gebru, who was the first to be shown the exit door, responded to Hinton’s departure, saying, “When Geoff Hinton was asked about the women’s concerns about AI for which we got fired, pushed out, harassed he said our concerns are not ‘existential risks’ to humanity whereas his are. This is what I mean about the almost exclusively white dudes who keep on talking about ‘existential risk’.”

Margaret Mitchell, a former leader on Google’s AI ethics team, is also upset that Hinton didn’t speak out about ethical concerns related to AI development during his decade in a position of power at Google, especially after the 2020 ouster of Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.

In 2018, Hinton dismissed the need for explainable AI, arguing that it would be a “complete disaster” if regulators insisted that AI systems be explainable. The Canadian AI pioneer claimed that most people cannot explain how they themselves reach decisions, and that requiring such explanations from AI systems would therefore be counterproductive.

Since the 1970s, Hinton has been at the forefront of developing brain-inspired neural network models, including models for visual understanding. Although he had previously maintained a detachment from the social impact of his work, claiming, “I’m an expert on trying to get the technology to work, not an expert on social policy”, Hinton’s resignation cited concerns over the dangers of AI as the reason for his change of heart. This shift in stance suggests that Hinton can no longer remain neutral and must now acknowledge the potential impact of his work on society.

PS: The story was written using a keyboard.

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.