Geoffrey Hinton and Andrew Ng, two pioneers of AI, came together for an interesting discussion on AI threats and risks.
Hinton recently left Google so he could speak freely about AI threats. He backed the likes of Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and several other AI experts who signed an open letter calling for a pause on AI development beyond GPT-4. Ng, however, is not in favour of the “pause”.
While Hinton now expresses concern over AI dangers, he previously dismissed ethical concerns raised by Google’s own team. He compares the potential risks of AI to the creation of the atomic bomb during World War II, emphasising the dangers of profit-driven AI development, which could result in AI-generated content surpassing human-produced content and jeopardising our survival.
Hinton Calls for Unity as AI Research Faces Diverse Opinion
During an insightful conversation with Ng, the two discussed how AI researchers need to reach a consensus similar to the one climate scientists have reached on climate change: that Earth’s temperature has risen substantially since the latter half of the 19th century, and that the main cause is human activity, predominantly the release of greenhouse gases into the atmosphere.
“If there are diverse opinions among AI researchers, it becomes easier for others to cherry-pick opinions that suit their agendas,” added Hinton. He continued to say that there is a significant diversity of opinions and even conflicting factions. It would be great to move past this phase and reach a point where researchers agree on the main threats posed by AI or at least agree on some of the major threats and their urgency and danger. This is because policymakers and decision-makers will seek technical opinions from researchers.
The Urgent Need for Consensus
Another important point discussed was the urgent need for researchers to reach a consensus on whether LLM chatbots like ChatGPT or Bard truly understand what they are saying or are merely statistical constructs. While some believe they understand, others disagree. Resolving this question is crucial for achieving consensus on AI-related matters.
The challenge in assessing understanding lies in identifying the appropriate tests for determining its presence in a system. Large language models appear to be constructing a world model, which suggests some level of understanding; however, this is a personal viewpoint. If the research community engages in further discussion of this question and develops a shared understanding, it can promote more consistent reasoning and better alignment within the AI community regarding the risks associated with AI. One aspect of this discussion relates to statistics, as everyone agrees that statistics play a crucial role.
However, some of those who consider it to be solely statistics tend to think in terms of programming or counting co-occurrence frequencies of words. “We believe that the process of creating features or embeddings and the interactions between these features go beyond mere statistics; it involves understanding,” he added.
By predicting the next symbol based on complex interactions between features, these systems can estimate the probability of the next words. Hinton personally believes that this process represents understanding, akin to what our brains do. However, this is a topic the research community needs to discuss, both to convince others that these systems are not just statistical constructs and to develop a shared understanding for addressing the risks associated with AI.
“Gaining a better understanding of what AI systems comprehend will likely bring the research community closer to reaching similar conclusions as a community,” he concluded.