Geoffrey Hinton Raises Concerns Over Profit-Driven AI Development, Urges Caution

Geoffrey Hinton and Andrew Ng, the two pioneers of AI, came together for an interesting discussion on AI threats and risks. 

Hinton recently left Google so that he could speak freely about AI threats. He backed Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and several other AI experts who signed an open letter calling for a pause on AI development beyond GPT-4. Ng, however, does not favour the pause.

While Hinton now expresses concern over AI dangers, he previously dismissed ethical concerns raised by Google's own team. He compares the potential risks of AI to the creation of the atomic bomb during World War II, warning that profit-driven AI development could result in AI-generated content overwhelming human-produced content and jeopardising our survival.

Hinton Calls for Unity as AI Research Faces Diverse Opinion

During the conversation, both argued that AI researchers need to reach a consensus similar to the one climate scientists achieved on climate change: it is now established that the Earth's temperature has risen substantially since the latter half of the 19th century, and that human activity, predominantly the release of greenhouse gases into the atmosphere, is the main cause.

“If there are diverse opinions among AI researchers, it becomes easier for others to cherry-pick opinions that suit their agendas,” said Hinton. He noted that the field currently holds a significant diversity of opinion, even conflicting factions, and said it would be valuable to move past this phase and reach a point where researchers agree on the main threats posed by AI, or at least on some of the major ones and their urgency. This matters because policymakers and decision-makers will seek technical opinions from researchers.

The Urgent Need for Consensus

Another important point discussed is the urgent need for researchers to reach a consensus on whether LLM chatbots like ChatGPT or Bard truly understand what they are saying, or are merely statistical constructs. While some believe they understand, others disagree. Resolving this question is crucial for achieving consensus on AI-related matters.

The challenge in assessing understanding lies in identifying the appropriate tests for determining its presence in a system. Large language models appear to be constructing a world model, which suggests some level of understanding, though this remains a personal viewpoint. If the research community engages in further discussion of this question and develops a shared understanding, it can promote more consistent reasoning and better alignment within the AI community regarding the risks posed by AI. Part of this discussion concerns statistics, since all sides agree that statistics play a crucial role.

However, some people who consider it to be solely statistics tend to think in terms of programming or counting co-occurrence frequencies of words. “We believe that the process of creating features or embeddings and the interactions between these features goes beyond mere statistics; it involves understanding,” he added.

By predicting the next symbol through complex interactions between features, these models estimate the probability of the next words. Hinton believes this process represents understanding, akin to what our brains do. However, this is a topic the research community needs to debate in order to convince others that these systems are not just statistical constructs, and to develop a shared understanding of the risks associated with AI.

“Gaining a better understanding of what AI systems comprehend will likely bring the research community closer to reaching similar conclusions as a community,” he concluded. 

Shritama Saha

Shritama (she/her) is a technology journalist at AIM who is passionate about exploring the influence of AI on different domains, including fashion, healthcare and banking.