Gary Marcus, Professor of Psychology and Neural Science at New York University, has been one of deep learning's biggest detractors. From demystifying AI's prowess, to locking horns with AI greats such as Professor Yann LeCun over AI's need for innate machinery, to casting serious doubts on the achievements of the big tech world (he tore down DeepMind's AlphaGo achievement in an arXiv paper), Marcus has been quick to point out the many flaws in deep learning. So it's no surprise that Marcus, in his new essay, "Deep Learning: A Critical Appraisal", once again gives a reality check to this buzzing field, which has made immense progress in image recognition, speech recognition and computer vision.
AI is divided into two schools of thought
In the world of artificial intelligence, there are two distinct schools of thought. On one side are the modern-day deep learning pioneers, the University of Toronto's Geoffrey Hinton and Facebook's Director of AI Research Yann LeCun, who have always couched their models in terms of neurons that draw inspiration from neuroscience. It was Hinton who, in 2011/2012, revived the technique by demonstrating how neural networks fed with huge amounts of data could give machines new powers of perception. Gradually, other big tech companies such as IBM, Microsoft and Facebook realized the commercial potential of deep learning techniques and made considerable progress in AI hardware to power everyday applications.
Marcus, a persistent critic, belongs to the other camp
Now, Marcus belongs to the other camp – the Marvin Minsky school of thought, which postulated that a purely neural network-focused approach wasn't enough to achieve intelligence. Minsky, a pioneering force in AI, propounded in his 1969 book Perceptrons the limitations of nascent neural networks and presented several key challenges in "reaching a deeper understanding of how objects or agents with individuality can emerge in a network." This was the first work that cast doubt on computational frameworks that were considered models of the brain. The book, co-authored by Seymour A. Papert, also identified new research directions related to connectionism. In his later work, Minsky posited that intelligence cannot stem from one system but from the interactions of numerous simple components, or "agents."
Now, in light of recent events, Marcus has couched his arguments in Minsky's principles. He has dubbed systems such as DeepMind's "learning systems" misleading, saying the field needs to consider the case for innate machinery in a more principled way.
Some of Marcus’s key arguments are:
- Deep learning only works well as an approximation, and its results cannot yet be fully trusted
- Deep learning cannot differentiate causation from correlation
- The field is not transparent and has no good way to deal with hierarchical structure
- It is acutely dependent on huge amounts of data and functions well only with fixed datasets
- In image recognition applications where data is restricted, deep learning algorithms cannot conceptualize new perspectives
Marcus, also a former head of AI at Uber, counts himself amongst nativists such as Steven Pinker and Elizabeth Spelke, and is perhaps the biggest critic of deep learning. In his latest argument, he emphasizes that the technique, rather than improving incrementally and paving the way for AGI, is heading towards a "trough of disillusionment". He also played down the excitement about research at companies such as DeepMind, saying it wouldn't lead to a revolutionary breakthrough in the short term.
Does Marcus have a vested interest in seeing Deep Learning fail?
Marcus's contrarian viewpoints have of late drawn a lot of criticism from the deep learning research community, which has dubbed his arguments misleading and completely wrong, and says they overlook DL's achievements in natural language translation and image recognition. According to Carlos E. Perez, author of Artificial Intuition and the Deep Learning Playbook and co-founder of Intuition Machine Inc, Marcus could very well have a vested interest in casting skepticism on the most dominant paradigm of AI and seeing it fail.
Some of Perez's most notable arguments, made in his post, are:
- By presenting a conflicting viewpoint, Marcus can steer less knowledgeable investors towards an alternative path for investment.
- The arguments in his latest critique are ambiguous at best and apply to all machine learning algorithms. In other words, Marcus hasn't presented any new insights or broken any new ground in his latest essay.
- His key claim, that deep learning is the wrong approach to move forward with, isn't backed by any other promising approach.
- Perez believes deep learning is a stepping stone that other cognitive tools will leverage in the future to achieve higher levels of cognition. It is here that Marcus, a cognitive psychologist, fails to understand that DL is the wheel of cognition and will pave the way for an effective approach to AI.
- Perez also adds that knowledge discovery in the future will be driven by exploration and exploitation. Deep learning algorithms will continue to improve and be refined, and the shortcomings pointed out by Marcus are slowly being addressed by the deep learning research community with new techniques.
- There is a lot of ongoing research that can overcome present limitations in Deep Learning such as transfer learning, wherein an algorithm trained on one dataset can be applied to a different problem.
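To make the transfer learning idea concrete, here is a deliberately tiny, hedged sketch in plain NumPy: a linear "feature extractor" is fitted on a plentiful source task, then frozen and reused on a target task that has very little data, where only a small output head is trained. All data and variable names here are made up for illustration; real deep learning transfer works the same way in spirit but with pre-trained network layers rather than a least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Source task: plenty of data, learn a feature extractor ---
# (toy stand-in for the large dataset a network would be pre-trained on)
X_src = rng.normal(size=(500, 10))
true_W = rng.normal(size=(10, 4))
y_src = X_src @ true_W + 0.1 * rng.normal(size=(500, 4))

# Fit a linear "feature extractor" by least squares.
W_feat, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# --- Target task: only 20 examples, so reuse the frozen features ---
X_tgt = rng.normal(size=(20, 10))
y_tgt = (X_tgt @ true_W).sum(axis=1) + 0.1 * rng.normal(size=20)

feats = X_tgt @ W_feat  # transferred features, kept frozen
head, *_ = np.linalg.lstsq(feats, y_tgt, rcond=None)  # train only the head

pred = feats @ head
mse = float(np.mean((pred - y_tgt) ** 2))
print(mse)  # small, despite only 20 target examples
```

The point of the sketch is the data efficiency Marcus's critique targets: the target task fits only 4 head parameters instead of relearning all 40 feature weights from its 20 examples.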
Can there be an alternative to Deep Learning?
Well, is there a possibility of a broader AI system that can perform multiple tasks without relying on petabytes of data? Such is the popularity of deep learning that other projects are being sidelined in terms of funding and publicity. A case in point is OpenCog, an open-source framework for integrated Artificial Intelligence and Artificial General Intelligence (AGI); the project is hosted on GitHub.
According to its GitHub page, OpenCog consists of multiple components, including a (hyper-)graph database, the AtomSpace, used for representing knowledge and as a surface on which learning and reasoning algorithms are implemented. While the project doesn't emulate the brain completely, it draws inspiration from neuroscience, computer science and cognitive psychology.
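To give a feel for what a hypergraph knowledge store looks like, here is a minimal sketch in plain Python, loosely inspired by the AtomSpace description above. Every class and method name here is hypothetical, written for illustration only; it is not OpenCog's actual API.

```python
class Atom:
    """A node or a link; a link is a hyperedge connecting any number of atoms."""
    def __init__(self, kind, name=None, outgoing=()):
        self.kind = kind                  # e.g. "Concept" or "Inheritance"
        self.name = name                  # set for nodes
        self.outgoing = tuple(outgoing)   # set for links

class AtomSpace:
    """A flat store of atoms, queryable by which links mention an atom."""
    def __init__(self):
        self.atoms = []

    def add_node(self, kind, name):
        atom = Atom(kind, name=name)
        self.atoms.append(atom)
        return atom

    def add_link(self, kind, outgoing):
        atom = Atom(kind, outgoing=outgoing)
        self.atoms.append(atom)
        return atom

    def incoming(self, target):
        """All links that mention `target` -- the hook reasoning code traverses."""
        return [a for a in self.atoms if target in a.outgoing]

# Usage: assert a small piece of knowledge and query it back.
space = AtomSpace()
cat = space.add_node("Concept", "cat")
animal = space.add_node("Concept", "animal")
space.add_link("Inheritance", (cat, animal))
print(len(space.incoming(cat)))  # the "cat" node participates in one link
```

The design point is that knowledge and the surface reasoning runs on are the same structure: an algorithm walks `incoming`/`outgoing` sets rather than a separate database.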
Richa Bhatia is a seasoned journalist with six years' experience in reportage and news coverage, with stints at the Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old and loves writing about the next-gen technology that is shaping our world.