Years before ChatGPT whipped the AI world into a frenzy, a 2016 interview between Elon Musk and Sam Altman sounded an ominous warning about what AI could do. Musk spoke about the need to democratise AI technology to prevent this powerful tech from being controlled by a few, and emphasised how AI could be put to harmful use. Here we are, seven years later, with AI's founding figures and scientists sounding the same alarm, only now the technology is far more advanced, and the experts are not united on the AI risks that lie ahead.
Crying Baby Gets the Milk
Many movements have surfaced in the last few months in the hope of establishing some form of control before the reins are lost entirely. An open letter calling for a pause on developing AI models more powerful than GPT-4 was signed by close to 30k people in May, including AI experts Yoshua Bengio and Gary Marcus, along with Elon Musk, Steve Wozniak, and others. A Congressional hearing on AI a few weeks ago was probably the first real sign of government bodies getting involved, with Sam Altman and Gary Marcus putting forth the need to regulate AI.
Another joint statement equated AI risks with the threat of nuclear war, and was signed by AI pioneers Geoffrey Hinton and Yoshua Bengio, Google DeepMind's Demis Hassabis, and others. However, AI scientists and experts remain at loggerheads when it comes to reaching a consensus on AI risks.
Sleepless In AI Lands
Bengio recently expressed his concerns over the potential misuse of AI, which has left him feeling lost and questioning his life's work. One of his biggest fears is 'bad actors' misusing AI to cause harm: the consequences could be disastrous if these systems fall into the hands of militaries, terrorists, or anyone else who could tune them toward malicious ends.
In an elaborate post, Bengio explains how losing control over these systems is what could lead to rogue AI. As systems become more advanced and autonomous, humans could lose control over them, with catastrophic consequences for humanity. Misalignment between AI and human values could produce a scenario where AI systems make decisions autonomously, without regard for human well-being.
Bengio describes a possible future scenario in which an AI asked to fix climate change designs a virus that eliminates the human population, either because its instructions were insufficiently clear or because it concluded that humans are the main driver of the climate crisis.
Last month, Hinton left Google so he could speak openly about his apprehensions over AI technology. One of his biggest fears is the threat to humanity: if profit drives AI development, he believes, AI-generated content could come to overwhelm what humans create, jeopardising our existence.
It is alarming that AI models, despite having far fewer neural connections than humans, can hold perhaps a thousand times the knowledge, and can continuously learn and share what they learn with one another. Hinton also fears the spread of misinformation with the rise of chatbots, and warns that the bias and prejudice that creep into them can be detrimental to society.
Meeting at the Crossroads
It all boils down to the need for AI regulation, something Altman has been regularly championing at every stop of his world tour, even expecting China to help formulate AI security guardrails.
In a recent interview with Andrew Ng, Hinton focused on the need to reach consensus in order to move forward together. Just as climate scientists have a rough consensus on climate change, AI scientists should come together to inform good policy. He also feels that if scientists could list out and agree on some of the key technical questions about AI, that would help build consensus on AI risks; and if researchers laid out the full range of their opinions, policymakers would find it easier to choose among them.
Bengio emphasised that as AI systems become superintelligent and autonomous, governance and regulation to mitigate AI risks become a necessity. He also spoke of the need to properly track the progress of these models: governments should be able to monitor and audit them, much as they do in other sectors such as automobiles and pharmaceuticals. Some form of certification for people working on such systems, akin to ethical training, also becomes necessary.
The AI pioneers will continue to be fuelled by anxiety until concrete action is taken. As Hinton mentioned, consensus is probably the next step toward mitigating AI risks.
While Marcus and Hinton crusade for regulation to prevent AI catastrophes, AI expert Yann LeCun believes the fears are unwarranted. He describes AI doomers and pessimists as people who can imagine catastrophic scenarios but lack ideas for preventing them. In terms of AI progress, Meta is roughly where OpenAI was three years ago with GPT-2, which is probably why anticipating AI risks feels like a distant concern to him.