Building thinking machines has been a human obsession for ages, and throughout history many researchers have worked on creating intelligent machines. While neural networks are now the most popular form of AI, ‘symbolic AI’ once played a crucial role: it powered IBM Watson to beat human players at Jeopardy in 2011, before being overtaken by neural networks trained with deep learning.
While neural networks have given us many exciting developments, researchers believe that for AI to advance, it must understand not only the ‘what’ but also the ‘why’ and even process the cause-effect relationships.
Current deep learning models are flawed in their lack of interpretability and their need for large amounts of training data. This has prompted researchers to explore newer avenues in AI, namely the union of neural networks and symbolic AI techniques.
What Is Neuro-Symbolic AI?
Neuro-symbolic AI combines the deep learning neural network architectures we have known till now with symbolic reasoning techniques. For instance, we have been using neural networks to identify what shape or colour a particular object has. Applying symbolic reasoning on top can take this a step further, deriving more interesting properties of the object, such as its area, volume and so on.
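This division of labour can be sketched in a few lines of Python. The detections, labels and area rules below are all invented for illustration: a (hypothetical) neural perception module emits symbolic labels and measurements, and hand-written symbolic rules derive new properties from them.

```python
import math

# Hypothetical output of a neural perception module: each detected
# object is reported with a symbolic label and measured attributes.
detections = [
    {"shape": "circle", "colour": "red", "radius": 2.0},
    {"shape": "square", "colour": "blue", "side": 3.0},
]

# Symbolic rules derive a new property (area) from the recognised symbols.
AREA_RULES = {
    "circle": lambda obj: math.pi * obj["radius"] ** 2,
    "square": lambda obj: obj["side"] ** 2,
}

def enrich(objects):
    """Attach a derived 'area' property using shape-specific rules."""
    return [{**obj, "area": AREA_RULES[obj["shape"]](obj)} for obj in objects]

for obj in enrich(detections):
    print(obj["colour"], obj["shape"], round(obj["area"], 2))
```

The neural part supplies the symbols; the symbolic part reasons over them with rules it was never shown any training examples for.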
Overcoming The Shortfalls Of Neural Networks And Symbolic AI
If we look at human thought and reasoning, humans use symbols as an essential part of communication, which is part of what makes them intelligent. To make machines work like humans, researchers tried to build symbols into them. This symbolic AI was rule-based and involved explicitly embedding human knowledge and behavioural rules into computer programs, making the process cumbersome. It also made systems expensive, and they became less accurate as more rules were incorporated.
To deal with these challenges, researchers explored a more data-driven approach, which led to the popularity of neural networks. While symbolic AI needed to be fed every bit of information explicitly, neural networks could learn on their own if provided with large datasets. While this has worked well, the lack of model interpretability and the large amounts of data needed to keep learning, as mentioned earlier, call for a better system.
To put it more precisely: while deep learning is suitable for large-scale pattern recognition, it struggles to capture compositional and causal structure from data. Symbolic models, on the other hand, are good at capturing compositional and causal structure but struggle with complex correlations.
The shortfalls of these two techniques have led to their merging into neuro-symbolic AI, which is more efficient than either alone. The idea is to combine learning and logic, thereby making systems smarter. Researchers believe that symbolic AI algorithms will help incorporate common sense reasoning and domain knowledge into deep learning. For instance, when detecting a shape, a neuro-symbolic system would use a neural network’s pattern recognition capabilities to identify objects and symbolic AI’s logic to reason about them.
A neuro-symbolic system, therefore, uses both logic and language processing to answer questions, similar to how a human would respond. It is not only more efficient but also requires very little training data, unlike neural networks.
IBM and MIT Researchers Are Leading The Way Of Neuro-Symbolic AI
The MIT-IBM Watson AI Lab, along with researchers from MIT CSAIL, Harvard University and Google DeepMind, has developed a new, large-scale video reasoning dataset called CLEVRER — CoLlision Events for Video REpresentation and Reasoning. According to the paper, it helps AI recognize objects in videos, analyze their movement, and reason about their behaviours.
They used CLEVRER to benchmark the performance of neural networks and neuro-symbolic reasoning using only a fraction of the data required for traditional deep learning systems. It helped AI not only understand causal relationships but also apply common sense to solve problems.
What did they do?
As per the paper, the researchers used CLEVRER to evaluate the ability of various deep learning models to apply visual reasoning. These deep learning models work on perception-based learning, meaning that they fared well in answering descriptive questions but did poorly on questions based on cause-and-effect relationships.
To overcome this shortcoming, they created and tested a neuro-symbolic dynamic reasoning (NS-DR) model to see if it could succeed where neural networks could not. It used neural networks to recognize objects’ colours, shapes and materials and a symbolic system to understand the physics of their movements as well as the causal relationships between them.
“More specifically, NS-DR first parsed an input video into an abstract, object-based, frame-wise representation that essentially catalogued the objects appearing in the video. Then, a dynamics model learned to infer the motion and dynamic relationships among the different objects. Third, a semantic parser turned each question into a functional program. Finally, a symbolic program executor ran the program, using information about the objects and their relationships to produce an answer to the question,” stated the paper.
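The four stages quoted above can be caricatured in a short sketch. Everything here — the frame data, the one-question mini “language”, and the executor’s operations — is invented for illustration; in the actual NS-DR model the perception, dynamics and parsing components are learned neural modules, not hand-written functions.

```python
# Stage 1: frame-wise, object-based parse (stand-in for the neural perceiver).
frames = [
    [{"id": 0, "shape": "cube", "x": 0.0}, {"id": 1, "shape": "sphere", "x": 5.0}],
    [{"id": 0, "shape": "cube", "x": 2.0}, {"id": 1, "shape": "sphere", "x": 5.0}],
    [{"id": 0, "shape": "cube", "x": 4.0}, {"id": 1, "shape": "sphere", "x": 5.0}],
]

# Stage 2: a trivial "dynamics model" — estimate each object's average velocity.
def infer_dynamics(frames):
    steps = len(frames) - 1
    return {first["id"]: (last["x"] - first["x"]) / steps
            for first, last in zip(frames[0], frames[-1])}

# Stage 3: a hand-written "semantic parser" mapping a question to a program.
def parse_question(question):
    if question == "Which object is moving?":
        return [("filter_moving",), ("query_shape",)]
    raise ValueError("unsupported question")

# Stage 4: a symbolic program executor that runs the program over the
# parsed objects and their inferred dynamics to produce an answer.
def execute(program, frames, velocities):
    result = frames[0]
    for (op,) in program:
        if op == "filter_moving":
            result = [o for o in result if abs(velocities[o["id"]]) > 1e-6]
        elif op == "query_shape":
            result = [o["shape"] for o in result]
    return result

velocities = infer_dynamics(frames)
answer = execute(parse_question("Which object is moving?"), frames, velocities)
print(answer)  # ['cube']
```

The key design point the sketch preserves is the hand-off: once perception and dynamics have produced a symbolic scene description, the executor answers the question with explicit, inspectable steps rather than an opaque end-to-end mapping.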
Researchers found that NS-DR outperformed the deep learning models significantly across all categories of questions.
The Way Forward
While the complexity of tasks that neural networks can accomplish has reached a new high with GANs, neuro-symbolic AI offers hope of going further still. By combining the best of both systems, it can create AI systems that require less data and demonstrate common sense, thereby accomplishing more complex tasks.