AI/ML research is constantly evolving and unpredictable. 'Old-fashioned' concepts are often turned over and revived in a form entirely different from the way they first appeared; it is only a matter of recognising the merit in them and fusing them ingeniously. Even deep learning, the branch of ML that is fashionable now, can be traced back to 1943. As unanticipated as it may seem, areas within the discipline mature steadily, sometimes hit long pauses in their development, and then resume again later.
I’m glad we all agree that neurosymbolic AI is the way to go. Who proposed it first really doesn’t matter.
— Pedro Domingos (@pmddomingos) September 27, 2022
Symbolic AI and Deep Learning: A Happy Marriage
Likewise, a section of scientists had long anticipated the potential of symbolic AI systems to bring machines to human levels of comprehension. Popular between the 1950s and the 1980s, symbolic AI was the first attempt at building AI. It played upon the human brain's ability to make sense of the world in terms of symbolic representations and their interconnections, using sets of rules to define the concepts that capture everyday knowledge.

Symbolic models are able to grasp compositional and causal knowledge, which can pave the way for flexible generalisation in AI models. Neural networks in deep learning, on the other hand, can learn directly from raw data but lack that causal and compositional structure, which means they have to be retrained over and over to learn new tasks.
In the last couple of years, experts have suggested amalgamating the two into a new class of AI, called neurosymbolic AI, in which each approach compensates for the other's weaknesses. Such a system pairs neural networks that extract statistical structure from raw data, giving it context about images and sound (the deep learning part), with symbolic representations of problems and logic (the symbolic side).
Neurosymbolic AI carries some remarkable benefits: it does not require the troves of training data that deep learning depends on, and it keeps track of the inference steps it takes to reach a conclusion.
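To make that division of labour concrete, here is a minimal toy sketch in Python of how such a hybrid loop might look. The perceive() stub, the symbol names and the rules are all illustrative assumptions, standing in for a trained network and a real knowledge base rather than describing any system mentioned in this article: the neural side turns raw input into symbols with confidence scores, and the symbolic side forward-chains rules over those symbols.

# Toy neurosymbolic loop: a "neural" perception step emits symbols with
# confidences, and a symbolic rule base reasons over those symbols.

def perceive(image):
    # Stand-in for a trained neural network: maps raw input to symbol
    # confidences (a real system would return scores from a classifier).
    return {"has_wheels": 0.97, "has_wings": 0.08, "is_large": 0.91}

# Simple Horn-style rules, (conclusion, set of premises); all hypothetical.
RULES = [
    ("vehicle", {"has_wheels"}),
    ("truck", {"vehicle", "is_large"}),
    ("aircraft", {"has_wings"}),
]

def infer(facts):
    # Forward-chain the rules until no new symbol can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

scores = perceive(image=None)                          # neural side (stubbed)
facts = {sym for sym, p in scores.items() if p > 0.5}  # symbol grounding
print(infer(facts))  # derives "vehicle" and "truck" from the perceived symbols

The point of the sketch is only the split: perception stays statistical and trainable, while the rules stay explicit, inspectable and reusable across tasks.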

Gaps in Deep Learning
Since last year, work on how neurosymbolic AI can advance generalisation has picked up pace. At ICLR, Brenden Lake, Assistant Professor of Data Science and Psychology at NYU, and Reuben Feinman, a PhD student at NYU and Google PhD Fellow, presented a paper titled 'Learning Task-General Representations with Generative Neuro-Symbolic Modeling'.
The pair built a generative neurosymbolic (GNS) model that learnt conceptual representations from a single training image using probabilistic inference, and then generalised what it had learnt to four other unique tasks.
Generalisation has always been an Achilles heel for deep learning despite the predominance of the branch. Repeated experiments show that, even when artificial neural networks (ANNs) achieve high prediction accuracy, the basis on which they draw their inferences may not be sound.
A 2015 paper co-authored by GAN creator Ian Goodfellow, titled 'Explaining and Harnessing Adversarial Examples', demonstrated that even state-of-the-art deep learning networks can be fooled by small, deliberately crafted perturbations to their inputs, suggesting that what they learn about images does not generalise the way human recognition does.
Even machines that are adept at playing games using deep reinforcement learning aren't known to follow generally applicable principles that would help them play many other games. For all the superhuman capabilities of models such as DeepMind's AlphaGo, the smallest change to the environment can make a model perform as if it had never been trained. In a 2019 paper titled 'Reconciling deep learning with symbolic artificial intelligence: representing objects and relations', Marta Garnelo and Murray Shanahan discussed how a marriage between deep learning and symbolic AI could address the shortcomings of a purely connectionist paradigm.
what is at issue is how intensely we should study neurosymbolic AI as opposed eg to scaling LLMs. that’s a huge research choice, with (i suspect) huge consequence
— Gary Marcus (@GaryMarcus) September 27, 2022
Recent Work in Neurosymbolic AI
The approach has even found its way into very recent models like CICERO, an agent announced by Meta AI in November last year. CICERO was the first AI to reach human-level performance at Diplomacy, a strategy-based board game.
Gary Marcus, NYU professor and deep learning critic, spoke about what this meant for the road ahead in AI. He admitted that, while it wasn’t clear how generalisable Cicero was, “Some aspects of Cicero use a neurosymbolic approach to AI, such as the association of messages in language with symbolic representation of actions, the built-in (innate) understanding of dialogue structure, the nature of lying as a phenomenon that modifies the significance of utterances, and so forth.”
“As it turns out, the architecture of Cicero differs profoundly from most of what’s been talked about in recent years in AI.” https://t.co/1kIaouzwV6
— Grady Booch (@Grady_Booch) November 25, 2022
The renewed attention to the neurosymbolic branch may also be an indirect consequence of the current hyperfocus on the two main directions in research: scaling already huge LLMs and building buzzy generative AI tools. Even with all the extensive work being done in these areas, the AGI dream continues to look distant.
In a paper published in December last year titled 'A Semantic Framework for Neural-Symbolic Computing', authors Simon Odense and Artur d'Avila Garcez describe how a unifying semantic framework can take neurosymbolic AI further. The paper shows how such a framework applies to neural encodings of knowledge, giving researchers a common basis for analysing neurosymbolic systems. All of this is to say that neurosymbolic AI is worth more than one shot at smoothing the road to AGI.