Can AI actually make ethically sound decisions? Well, AI thinks it cannot.
Recently, chip-maker NVIDIA’s powerful transformer Megatron was invited to debate the ethics of AI at the Oxford Union. During the debate on ethical AI, the language generation model said:
‘AI will never be ethical.’
What is Megatron?
Developed by the Applied Deep Learning Research team at NVIDIA, Megatron is a large and powerful transformer that builds on earlier work by Google, drawing on models such as GPT, T5 and BERT.
Like other large language models, Megatron was trained on real-world data, including:
- The whole of English Wikipedia
- About 63 million English-language news articles dated between 2016 and 2019
- 38 gigabytes of Reddit discourse
- A large number of creative sources
The transformer has thus formed views of its own after being trained on more data than a human could read in an entire lifetime.
At the debate on ‘This house believes that AI will never be ethical’, Megatron argued that AI is a tool, and like all other tools, it can be used for good or bad. There is no such thing as ‘good AI’, only good or bad humans. It further added that AI is not smart enough to make itself ethical or moral. It said:
“…I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.”
It ended the talk by suggesting that the best AI will be one that is embedded into human brains. The resultant ‘conscious AI’ will apparently be the most significant technological development of our time.
It almost seems as if the AI had been training itself on Elon Musk’s tweets and talks. Shortly before this debate, Elon Musk spoke along similar lines at The Wall Street Journal CEO Council Summit, where the Tesla and SpaceX chief executive discussed Neuralink’s ‘brain implants’.
Elon Musk founded the Neuralink Corporation in 2016 to develop high-bandwidth implants that can communicate with computers and smartphones. The neurotechnology company is working on implantable brain-machine interfaces, or BMIs. At the summit, Musk mentioned that the project is working well in monkeys and that, if approval from the Food and Drug Administration (FDA) comes as planned, he hopes to see it tested in the first human subjects: quadriplegics and tetraplegics.
In the past, we have seen AI improve not through human effort but by training itself (as with self-play in chess). AI has thus often found new ways to better things without human intervention. At the Oxford Union, Megatron was also asked to argue against its own motion, to which the language transformer responded that, given the direction the tech world is taking, AI will be ethical. It added that AI will be used to create something better than the best of human beings, and that it has seen this first hand.
It ended by reminding the audience that in the 21st century, the ability to provide data, rather than goods and services, will be a defining feature of the economy. Megatron could not produce a counter-argument to this point, which itself illustrates why ‘data is the new oil’.
Until now, we humans have been debating the ‘black box’ problem in AI. With AI now arguing both for and against itself in debates, we get a glimpse of the larger picture of its tremendous capabilities. Whether those capabilities will be used ethically, only time will tell.