Geoffrey Hinton Is The Bad Dad of AI

Visionary Geoffrey Hinton recently left Google to speak out about the dangers of AI

Geoffrey Hinton, widely touted as the grandfather of deep learning, recently quit Google so he could talk more freely about the threats posed by AI. He said he will also respond to requests for help from Bernie Sanders, Elon Musk and the White House. Yet a few years ago, when Google's own ethics team raised alarms about the big tech company's unethical practices, the AI Prometheus turned a blind eye.

In the past, Hinton has expressed concern over the potential dangers of AI, likening it to Robert Oppenheimer's work on the Manhattan Project, which developed the world's first nuclear bombs during World War II. The 75-year-old polymath believes that the pursuit of profit in AI development could lead to a world where AI-generated content outnumbers that produced by humans, thereby endangering our very survival.

The Oppenheimer Fallacy

In the past, when asked about the potential harm of AI, Hinton paraphrased Oppenheimer, saying that when one encounters something technically sweet, one goes ahead and pursues it.

However, Hinton now expresses regret over the consequences of his work. He acknowledges that the once far-fetched idea of machines surpassing human intelligence is now a realistic possibility. Hinton, who previously believed such advancements were still “30-50 years away”, cites the recent progress in large language models, particularly OpenAI’s GPT-4, as evidence of how quickly machines are advancing.


He said, “Look at how it was five years ago and how it is now. Take the difference and propagate forwards. That’s scary.”

Interestingly, the nuclear weapons analogy also resurfaced in the Stanford Artificial Intelligence Index Report 2023 which was released last month. The breadth of AI’s applications is unlike any other field. The report notes that 36% of NLP (natural language processing) researchers polled think that artificial general intelligence (AGI) could lead to a catastrophic event on par with a nuclear disaster. 

While the analogy provides a helpful point of reference, it has its limitations. AI's reach cuts across domains as different as social media and nuclear weapons, so its breadth is unlike that of any single prior technology. Hence, analogies like the Oppenheimer comparison can be illuminating yet incomplete when describing the scope of AI.

Hinton on the fence

Addressing the NYT article by Cade Metz that suggested Hinton left Google in order to criticize the company, he clarified that he left Google to speak out about the dangers of AI without being constrained by any potential impact on the company. He further noted that Google has acted responsibly in its pursuit of AI.

But we do not agree with him.  

Ethically, Google has been in a state of flux since 2020, when the big tech company fired members of its AI ethics team. Prominent Black female scientist Timnit Gebru, the first to be shown the exit door, responded to Hinton’s quitting by saying, “When Geoff Hinton was asked about the women’s concerns about AI for which we got fired, pushed out, harassed he said our concerns are not “existential risks” to humanity where as his are. This is what I mean about the almost exclusively white dudes who keep on talking about “existential risk.”

Margaret Mitchell, a former leader on Google’s AI ethics team, is also upset that Hinton didn’t speak out about ethical concerns related to AI development during his decade in a position of power at Google, especially after the 2020 ouster of Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.

In 2018, Hinton dismissed the need for explainable AI, arguing that it would be a “complete disaster” if regulators insisted that AI systems be explainable. The Canadian AI pioneer claimed that most people cannot explain how they themselves make decisions, and that requiring such explanations from AI systems would therefore be counterproductive.

Since the 1970s, Hinton has been at the forefront of developing computational models inspired by the human brain. Although he had previously maintained a detachment from the social impact of his work, claiming, “I’m an expert on trying to get the technology to work, not an expert on social policy”, his resignation cited concerns over the dangers of AI as the reason for his change of heart. This shift in stance suggests that Hinton can no longer remain neutral and must now acknowledge the potential impact of his work on society.

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
