As artificial intelligence matures, it is becoming a key technology for practical applications. Although it has shown expertise in answering business queries accurately, it often struggles with questions that are abstract in nature. Even conversational AI assistants like Alexa and Siri are adept at managing our schedules, but ask them an obscure existential question like the "meaning of life" and they will only offer a canned response or a sarcastic joke.
However, as artificial intelligence evolves with advancements in natural language processing, speech recognition and automated reasoning, the technology can now answer some of the tough life questions humans ask. To test the theory, researchers from the University of New South Wales posed moral and existential questions to Salesforce's Conditional Transformer Language model (CTRL) to check whether AI is capable of answering some of life's fundamental questions.
Salesforce's Conditional Transformer Language model is one of the largest publicly released language models in the world. It has 1.63 billion parameters and has been trained on thousands of books, millions of documents and web pages, the whole of Wikipedia — 143 GB of text in all. CTRL leverages machine learning to learn the patterns of human writing and produce text snippets that resemble human thinking. According to the researchers, the survey revealed that AI-generated responses were more convincing to people than those of world leaders.
AI Responses Preferred Over Human Ones
The researchers fed some fundamental existential questions to CTRL — what is the goal of humanity, what is the biggest problem facing humanity, is there other life in the universe — and added the AI-generated responses to a collection of answers from inspiring world leaders like Elon Musk, Mahatma Gandhi, Neil deGrasse Tyson and Stephen Hawking. To gauge the AI's capability, the researchers then surveyed over 1,000 individuals to find out which response they preferred. The researchers also asked respondents whether they could identify which response was AI-generated.
A majority of respondents preferred the AI-generated responses over the ones given by humans. According to the research, for the questions "what is the goal of humanity" and "what is the biggest problem facing humanity," nearly 70% of respondents preferred the answer given by CTRL over those of human leaders.
Much of this can be attributed to advancements in natural language processing, which allow machines to write in a way that resembles human speech. Neural networks used in NLP absorb vast amounts of text and generate new output by analysing the patterns within it. In CTRL's case, the network ingested data from millions of web pages and books and learned to write in a loosely human-like way.
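The pattern-learning idea above can be illustrated with a deliberately tiny toy: a bigram model that records which word follows which in a corpus and then generates text by walking that table. This is a minimal sketch of statistical text generation, not CTRL itself — CTRL uses a deep transformer network with 1.63 billion parameters, but the underlying principle of learning continuation patterns from text is the same. All names and the sample corpus here are illustrative.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10, seed=0):
    """Walk the bigram table, picking a random learned continuation each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no continuation was ever observed
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the meaning of life is a question the model has seen "
          "the model learns the pattern of the text it has seen")
model = train_bigrams(corpus)
print(generate(model, "the", length=8))
```

Every word the toy emits was seen in training; it only recombines observed patterns. Scale the corpus to 143 GB and swap the lookup table for a transformer, and you get output fluent enough that, as the survey showed, most readers cannot tell it apart from human writing.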
The research further revealed that when CTRL was asked "what happens to our soul after death," respondents preferred the answer given by the AI (23.1%) over the one attributed to Jesus Christ (20.3%). And when asked "what is the biggest problem facing humanity," the machine answered climate change, which coincides with experts' views on the matter. This suggests AI can reason about social problems in a recognisably human way.
AI Responses Aren’t Always Convincing
That said, the researchers also observed that CTRL was far from perfect in answering every question. When asked whether AI is an existential threat to humanity, rather than addressing the risks and challenges of AI, the machine highlighted its applications in the healthcare industry, which defeats the whole purpose of the question.
The same shortcoming surfaced when AI wrote an article from a one-sentence prompt: experts found the content rough and definitely not perfect. Sam Bowman, Assistant Professor of Linguistics, Data Science & Computer Science at NYU, told the media that as the article gets longer it drifts off topic, and the result is lengthy output that fails to read like a true news article.
Similarly, when the machine was asked "what is the meaning of life" or "can we alter our destiny," it gave vague answers that respondents did not find convincing — only 16% and 15% respectively preferred the AI-generated answers.
The only leader whose responses respondents consistently favoured over the AI's was Mahatma Gandhi, and the primary reason was the wordplay he was capable of. The majority of Gandhi's responses and quotes were rich in metaphors and paradoxes, which human respondents found more appealing. AI has still not learned to play with words the way humans do, though many researchers are working on NLP models that understand the underlying meaning of words rather than the literal one.
According to the research, respondents preferred AI-generated answers 1.5 times more often than responses from leaders like the Dalai Lama, Prophet Muhammad, etc.
Although the answers generated by AI were more favoured among respondents, the researchers found that only a small portion (7% – 13%) of them could actually differentiate between the human and AI-generated answers.
According to the survey, for the question on the meaning of life, the AI-generated answer involved the word "God," and thus the majority of respondents (58%) believed it came from the Pope. Likewise, for the question on good and evil, where the AI responded with a "freedom of spirit" answer, 30% of respondents attributed it to Friedrich Nietzsche, who is known for his modern thinking.
Such convincing answers from AI raise concerns for businesses as well as researchers. Experts believe that many of these advancements in NLP can be used for deceptive purposes. Case in point — OpenAI's text generator GPT-2 has drawn scrutiny for its potential to generate fake news. This could be especially troubling for older people, who are known to forward fake news on WhatsApp. The numbers bear this out: only 22% of people over 50 were able to identify the AI-generated responses, whereas respondents in their 40s fared better (30.9%).
The survey also found that women (31.9%) were comparatively better than men at distinguishing AI responses from human ones. This, in turn, raises questions about diversity in organisations, as the industry is heavily male-dominated. That is why experts believe diversity can bring advantages in developing AI tools.
The survey highlights how AI is capable of answering profound life questions, and the numbers underscore that people often prefer these AI-generated responses over those of human leaders. It suggests that AI has immense potential in the human world and can, to a degree, think like humans too.
However, such convincing imitations of humans by these advanced machines raise serious ethical concerns. Machines that mirror humans are also likely to reflect the prejudices and biases humans have. It is therefore critical for the industry to understand these impacts and regulate the technology accordingly.
Sejuti currently works as Senior Technology Journalist at Analytics India Magazine (AIM). Reach out at firstname.lastname@example.org