For a vast majority of people, it may not be easy to understand why Hanson Robotics' Sophia has riled up prominent AI researcher Yann LeCun, Facebook's Head of AI and founding director of NYU's Center for Data Science, quite so much. After all, isn't it viewed as a step towards AGI, or human-level AI?
Is Hanson Robotics misleading global opinion on AI?
Some view it as narrow AI, while others bracket it in the AGI space, but for LeCun this is merely a stunt: "Potemkin AI", an "animatronic puppet", "Wizard-of-Oz AI", with media organizations complicit in the scam by promoting it. LeCun's doubts are well-founded, because for most of their careers, researchers believed that achieving human-level AI was hopeless, almost impossible. Does Hanson Robotics' Sophia bring us any closer?
From Hanson Robotics' standpoint, Sophia has a more mercenary value: pure publicity for a startup, and a way of documenting progress in AI. But does Sophia bridge the gap between narrow AI and strong AI? No. Hanson Robotics' chief scientist and CTO Ben Goertzel conceded that Sophia is not what he would call AGI, but said it is cutting-edge in terms of the dynamic integration of perception, action and dialogue.
On the other hand, Hong Kong-based Hanson Robotics and Sophia's creator Dr David Hanson, a 2007 PhD graduate of the University of Texas, counter that the company strives earnestly for AGI, believing that bio-inspired robotic embodiment can help AI get smarter and more useful. These are two contrasting opinions from within the company's own leadership.
Hanson Robotics' latest invention did divide the Twitter world, with people falling into two distinct camps: a) detractors who dismissed it as an AI stunt and buzzword overload, and b) AGI optimists who believe it is advancing AI research. Sophia first debuted on the world stage in October 2017 and has since sparked many conversations around social and ethical concerns in AI, including systems that interact with human beings and autonomous systems that aren't trustworthy yet.
Yann LeCun is not alone: many AI researchers believe Sophia is just a gimmick
Sophia serves as an artwork, a research platform, and a tool for uses like therapy and education. That statement comes from David Hanson, a roboticist who worked at Walt Disney Imagineering as both a sculptor and a technical consultant, and who also made the animatronic President Trump in Disney World's Hall of Presidents attraction, a figure that was widely panned by the public. In the case of this humanoid robot, David Hanson built the robot itself, while Ben Goertzel was tasked with the AI/AGI, based on his OpenCog system.
At best, Sophia can be described as a chatbot with a face. Researchers assert that what human-machine interaction designers have done is link narrow AI algorithms together to give the appearance of a more capable system. The result is a speech-reciting robot that can drum up witty conversations from pre-loaded text, with machine learning used to match facial expressions and pauses to that text. That said, Sophia does score on some counts:
- For example, its voice recognition technology is better than that of Siri or Alexa
- Hanson Robotics' humanoid robot displays a more capable dialogue understanding system
- Virtual agents like Siri, Alexa and Cortana are designed for simple tasks, not for conversation
- Sophia is akin to a preprogrammed robot running chatbot software, responding to cues with actual facial expressions and scripted answers
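The "narrow AI chained together" critique can be made concrete with a toy sketch: a scripted chatbot that matches cue phrases against canned answers, each paired with an expression tag that a separate animation module could act on. Every rule and name below is illustrative and assumed, not Sophia's actual software.

```python
import re

# Hypothetical cue -> (scripted answer, facial-expression tag) rules.
SCRIPT = [
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     ("Hello, nice to meet you.", "smile")),
    (re.compile(r"\bhow are you\b", re.I),
     ("I am functioning within normal parameters.", "neutral")),
    (re.compile(r"\bfuture\b", re.I),
     ("The future will be full of smart robots.", "grin")),
]

def respond(utterance: str) -> tuple[str, str]:
    """Return a scripted answer plus an expression tag for a spoken cue."""
    for pattern, (answer, expression) in SCRIPT:
        if pattern.search(utterance):
            return answer, expression
    # Fallback when no cue matches: a generic deflection.
    return "I'm still learning about that.", "neutral"
```

Chaining a speech recognizer in front of `respond` and an expression animator behind it yields exactly the kind of pipeline critics describe: each component is narrow, and no part of it understands the conversation.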
Do robots like this really represent huge leaps in AI?
Sophia has grabbed the attention of mainstream media since bursting onto the global stage, but it doesn't advance our understanding of AI.
- Firstly, LeCun asserts that in their attempt to build intelligent machines, researchers need to find new theories, principles, methods and algorithms that have applications in the short and medium term
- Facebook, too, has a long-term goal: understanding intelligence and building intelligent machines. Building intelligent machines is not just a technological challenge but a scientific question as well
- Reproducing intelligence in machines is a primary scientific question. To do that, one has to understand the human mind and how the brain works
- LeCun also conceded that Deep Learning in its current form is limited, but that on the path towards AGI, or human-level AI, deep learning will have to play a bigger role in the solution
- According to LeCun, the objective of Deep Learning is to build AI systems that learn abstract/high-level/hierarchical representations of the world
- Can human-level AI be built around the central paradigm of machine learning, where the objective is to minimize an objective function? Furthermore, can that minimization be done through gradient-based methods (like stochastic gradient descent, where the gradient is computed with backprop)? If this paradigm cannot be used, then researchers have to find new paradigms around which to build future algorithms for representation learning
- LeCun adds that even if we build machines with superhuman intelligence, they will have limited abilities to outsmart us in the real world
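The paradigm LeCun questions above, minimizing an objective function with stochastic gradient descent, can be sketched in a few lines. This is a deliberately tiny model with one parameter, so the backprop step collapses to a single chain-rule derivative; the dataset and learning rate are illustrative choices, not anything from the article.

```python
import random

random.seed(0)

# Toy dataset for the relation y = 3x; the "true" parameter is 3.0.
data = [(x, 3.0 * x) for x in range(1, 6)]

w = 0.0    # single learnable parameter
lr = 0.01  # learning rate (assumed value)

for step in range(2000):
    x, y = random.choice(data)   # "stochastic": one random sample per step
    y_hat = w * x                # forward pass
    loss = (y_hat - y) ** 2      # objective function to minimize
    grad = 2 * (y_hat - y) * x   # backward pass: dloss/dw via the chain rule
    w -= lr * grad               # gradient descent update

print(round(w, 2))  # w converges near 3.0
```

Whether this loop (scaled up to billions of parameters, with the gradient computed by backprop through deep networks) is a sufficient foundation for human-level AI is precisely the open question LeCun raises.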
Did Hanson Robotics one-up Facebook in the quest for General AI?
It is no secret that Facebook's ultimate goal is General AI, and researchers at FAIR, led by LeCun, are running a range of experiments on applications of AI in image and video understanding, text understanding, dialog systems, language translation, speech recognition, text generation, and other domains. The objective is to get learning machines to model their environment, to remember, to reason, and to plan. There is also some buzz around Facebook's secret lab Building 8, known for working on hard-to-define, crazy R&D projects such as a head-mounted prosthesis that can beam light into the eye and read neurons. The company is also working on new interfaces for AR.