Artificial Intelligence has taken much inspiration from biology and nature. Geoffrey Hinton, the British cognitive psychologist and computer scientist, cited the human brain as the primary motivation for his work on neural networks and the backpropagation-of-error algorithm.
However, many experts firmly believe that the backpropagation-of-error algorithm would be very hard to implement in the real human brain. At the same time, the huge success of deep learning in recent years has prompted many researchers to look more closely at the brain for other ideas that could be ported back into machine learning, and to explore whether the brain itself could plausibly run something like a backpropagation algorithm.
Now a team comprising Sergey Bartunov, Adam Santoro, Blake A. Richards, Geoffrey Hinton and Timothy P. Lillicrap has presented research into how well largely biologically-motivated deep learning systems scale, reporting results on the MNIST, CIFAR-10 and ImageNet datasets. The researchers explored several families of algorithms, such as target propagation (TP) and feedback alignment (FA). One aim of the research is to establish baselines that biologically-motivated deep learning work can build on going forward; the results and implementation details laid out by the team will serve as a base for many researchers.
Discussions And Debate Over The Backpropagation Algorithm
Even though the backpropagation algorithm underlies most of the advances we are seeing in the AI field today, there is little evidence that the brain actually performs backpropagation. Researchers at the Montreal Institute for Learning Algorithms (MILA) have pointed out that backpropagation's error feedback is purely linear, whereas biological neurons interleave linear and non-linear operations. Hinton himself has grown sceptical about the algorithm, going so far as to say that he wants to throw away his old beliefs and “start again.”
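The linearity point can be checked directly. The sketch below (a minimal, illustrative example, not from the paper) fixes a forward pass of a tiny tanh network and verifies numerically that the backprop error signal reaching the hidden layer is a linear function of the output error, since it is just the fixed matrix W transposed and a fixed elementwise scaling:

```python
import numpy as np

# Illustrative sketch: for a fixed forward pass, backprop's feedback
# e -> delta_h = (W.T @ e) * f'(a) is purely linear in the error e,
# because W and f'(a) are held constant during the backward pass.
rng = np.random.default_rng(42)

W = rng.normal(size=(3, 5))       # output weights (fixed)
a = rng.normal(size=5)            # hidden pre-activations (fixed)
fprime = 1 - np.tanh(a) ** 2      # derivative of the tanh hidden layer

def backprop_feedback(e):
    # the backward error signal reaching the hidden layer
    return (W.T @ e) * fprime

e1, e2 = rng.normal(size=3), rng.normal(size=3)
lhs = backprop_feedback(2.0 * e1 + 3.0 * e2)
rhs = 2.0 * backprop_feedback(e1) + 3.0 * backprop_feedback(e2)
assert np.allclose(lhs, rhs)      # linearity holds exactly
```

A biological neuron, by contrast, would pass any feedback signal through its own non-linear dynamics, which is precisely the mismatch the MILA observation highlights.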
The debate around the algorithm is nothing new. Some earlier objections to BP were relatively weak, based on the design and structure of artificial neural networks. According to the paper, the more serious concerns are:
- “The need for the feedback connections carrying the gradient to have the same weights as the corresponding feedforward connections”
- “The need for a distinct form of information propagation (error feedback) that does not influence neural activity, and hence does not conform to known biological feedback mechanisms underlying neural communication.”
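The first of these concerns is often called the "weight transport" problem. Feedback alignment, one of the algorithms the paper studies, sidesteps it by sending the error backward through a fixed random matrix B instead of the transpose of the forward weights. The sketch below (a toy regression example of my own, assuming a one-hidden-layer tanh network, not the paper's setup) shows the substitution:

```python
import numpy as np

# Toy sketch of feedback alignment (FA): backprop would compute the
# hidden error as (W2.T @ e) * f'(a); FA replaces W2.T with a fixed
# random matrix B, so no feedback path needs to mirror forward weights.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 16, 1
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback weights

f = np.tanh
f_prime = lambda x: 1.0 - np.tanh(x) ** 2

X = rng.normal(size=(256, n_in))
y = (X @ rng.normal(size=n_in))[:, None]  # linear teacher signal

mse_start = float(np.mean((f(X @ W1.T) @ W2.T - y) ** 2))

lr = 0.05
for _ in range(500):
    a1 = X @ W1.T                 # hidden pre-activations
    h = f(a1)                     # hidden activations
    e = h @ W2.T - y              # output error

    # FA: the error travels back through B, not W2.T
    delta_h = (e @ B.T) * f_prime(a1)

    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

mse_end = float(np.mean((f(X @ W1.T) @ W2.T - y) ** 2))
```

Despite the "wrong" feedback weights, the forward weights tend to align with B over training, so the loss still falls substantially on simple tasks; the paper's question is whether this survives at ImageNet scale.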
Scaling Learning Algorithms
The researchers observed that humans learn tasks far removed from their evolutionary history, and therefore deduced that the human brain must possess a very general and powerful learning algorithm for shaping behaviour. Many existing algorithms are loosely based on biological phenomena in the brain, but the researchers state three major concerns:
- The only variant of target propagation (TP) that has been explored empirically still depends on gradient computation via backpropagation.
- The algorithms have not been rigorously tested on datasets more difficult than MNIST.
- The algorithms have not been used in modern architectures that are far more complicated than classic multi-layer perceptrons (MLPs).
The researchers consider the second point the most worrying, because it speaks to whether the approach could work in a brain-like setting at all. Even accuracy and performance on a small number of machine learning tasks, measured with models that lack adaptive neural phenomena such as the many varieties of plasticity and evolutionary priors, can tell us something about an algorithm's suitability.
This underscores how important it is for researchers to invent and discover learning algorithms that are both biologically plausible and able to scale to large datasets. The research programme they propose therefore focuses on:
- The sufficiency of a learning algorithm
- The impact of biological constraints in a network
Experiments And Results
The researchers wanted to understand the limits and constraints of some biologically-inspired algorithms. They manually searched for architectures well suited to such algorithms, tweaked those architectures for the BP and FA (feedback alignment) versions, and ran independent hyperparameter searches for each learning method.
Recent research has renewed interest in using the backpropagation algorithm to understand the brain, an idea that had largely been abandoned in previous decades. In this study, the researchers examined TP and FA and presented a simple variant of the difference target propagation (DTP) algorithm, called simplified DTP (SDTP), that requires neither gradient propagation nor weight transport. They demonstrated that networks trained with SDTP without any weight sharing, i.e. without weight transport in the backward pass or weight tying in convolutions, perform much worse than those trained with DTP, likely because of impoverished output targets.
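The difference between the two comes down to how the output target is set. The sketch below is an illustrative simplification, not the paper's implementation: `g` stands in for a learned inverse (feedback) mapping, here just a linear map, and the DTP gradient step is shown for a softmax/cross-entropy loss, where the gradient at the logits is `y_hat - t`.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

h = rng.normal(size=(1, 8))             # top hidden activation
Wy = rng.normal(size=(3, 8)) * 0.3      # output weights
t = np.array([[0.0, 1.0, 0.0]])         # one-hot label
y_hat = softmax(h @ Wy.T)               # network output

# DTP: the output target takes a small step along the negative loss
# gradient (illustrated at the logits, where it equals y_hat - t),
# so it still relies on gradient information at the top layer.
eta = 0.5
y_target_dtp = y_hat - eta * (y_hat - t)

# SDTP: the output target is simply the label itself; no gradients
# anywhere, which is why its targets are "impoverished".
y_target_sdtp = t

def g(y):
    # Stand-in for the learned inverse mapping from output activities
    # back to hidden activities (assumption: linear, for illustration).
    return y @ Wy

# Difference correction shared by DTP and SDTP:
#   h_target = h + g(y_target) - g(y_hat)
h_target_dtp = h + g(y_target_dtp) - g(y_hat)
h_target_sdtp = h + g(y_target_sdtp) - g(y_hat)
```

In SDTP the target is the same one-hot vector for every example of a class, regardless of how wrong the network currently is, which is one intuition for why its hidden-layer targets carry less information than DTP's gradient-informed ones.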
A thorough data geek, Abhijeet spends most of his day building and writing about intelligent systems. He also has deep interests in philosophy, economics and literature.