Addressing Catastrophic Forgetting In ML With ANML & Meta-Learning

Sameer Balaganur
Catastrophic Forgetting in ML

What is the one thing neural networks have been battling over the years? Forgetting. Unlike human beings, who don’t forget skills like riding a bike or the rules of a sport even after coming back to them many years later, neural networks often forget previously learned tasks when they learn new information, a phenomenon called catastrophic forgetting.

Catastrophic forgetting prevents machine learning systems from ‘continual learning’, the ability to remember previous tasks while still learning new things. But all hope is not lost: some systems can still be trained to remember. Enter ANML, a neuromodulated meta-learning algorithm.

What is ANML?

While a lot of work has been done on keeping machine learning models from catastrophically forgetting previous knowledge, almost all of it involves manually designed solutions to the problem.

Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune, and Nick Cheney are the authors of a research paper called Learning to Continually Learn. In this paper, they propose ANML as a solution to catastrophic forgetting, which one of the authors, Jeff Clune, has called the Achilles heel of machine learning.

Instead of manually designing solutions, the researchers use meta-learning to tackle catastrophic forgetting, allowing the AI to continually learn.

ANML is an attempt by the researchers to get to AGI (Artificial General Intelligence) faster. ANML meta-learns the parameters of a neuromodulation network. Neuromodulation is a process in the brain in which some neurons regulate the activity and plasticity of other neurons, shaping what those neurons learn.

The Meta-learning Approach

The meta-learning approach concentrates on producing, through machine learning itself, an AI algorithm that continually learns without forgetting. This work expands on OML, in which a MAML-style meta-learning algorithm produces a set of neural network layers that, when frozen and used by additional layers, minimise catastrophic forgetting in those layers. The meta-learning approach discovers what is actually useful instead of relying on what is merely believed to be effective; this ability is what differentiates meta-learning from the manual approach.
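To make that structure concrete, here is a minimal, self-contained sketch of a MAML-style meta-learning loop in PyTorch. It is not the authors’ code: the random tensors stand in for real task data, and the tiny networks, dimensions, and learning rates are arbitrary toy choices. The inner loop runs plain SGD over a short stream of batches; the outer loop then updates the meta-parameters so the model still fits both old and new data after that inner-loop trajectory.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, H, C = 20, 32, 5                     # input dim, hidden dim, classes (toy sizes)

# Meta-learned representation; in OML this part is frozen during the inner loop.
rep = torch.nn.Linear(D, H)
# Prediction-head weights whose *initialisation* is meta-learned.
w0 = (0.01 * torch.randn(C, H)).requires_grad_()

def predict(x, w):
    # Functional head so `w` stays differentiable through inner-loop updates.
    return F.linear(torch.relu(rep(x)), w)

meta_opt = torch.optim.Adam(list(rep.parameters()) + [w0], lr=1e-3)
inner_lr = 0.1

for step in range(20):                   # outer (meta) loop
    # Fake task stream: each batch stands in for one task seen sequentially.
    stream = [(torch.randn(8, D), torch.randint(0, C, (8,))) for _ in range(3)]
    remember = (torch.randn(16, D), torch.randint(0, C, (16,)))  # "old" data

    w = w0                               # inner loop starts from the meta-init
    for x, y in stream:                  # sequential inner-loop SGD
        loss = F.cross_entropy(predict(x, w), y)
        (g,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - inner_lr * g             # differentiable update (MAML-style)

    # Meta-loss: after learning the stream, does the model still fit old + new data?
    xs = torch.cat([stream[-1][0], remember[0]])
    ys = torch.cat([stream[-1][1], remember[1]])
    meta_loss = F.cross_entropy(predict(xs, w), ys)

    meta_opt.zero_grad()
    meta_loss.backward()                 # gradients flow back through the inner loop
    meta_opt.step()

print(float(meta_loss))
```

In OML, what gets meta-learned this way is the frozen representation; ANML keeps the same overall structure but meta-learns a neuromodulatory network instead, as described next.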

[Figure: Meta-test training classification accuracy]

How Meta-Learning Is Used Here

The researchers meta-learn a context-dependent gating function in the form of a neuromodulatory network. This network enables continual learning in a second network, called the prediction network. Conditioned on the input, the neuromodulatory network can turn activations in subsets of the prediction network on and off; this is called selective activation. Selective activation, in turn, enables selective plasticity, because the strength of backward gradients is a function of how active neurons were during the (modulated) forward pass, indirectly controlling which subset of the network will learn for each type of input. In other words, the meta-learning approach leverages selective activation and selective plasticity to improve continual learning.
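As an illustration of this mechanism, here is a minimal sketch of input-conditioned gating in PyTorch. The architecture and sizes are assumptions for a toy example, not the paper’s exact networks: a neuromodulatory network maps the input to a per-unit gate in (0, 1), which multiplies the prediction network’s hidden activations, so units gated near zero neither fire on the forward pass nor receive meaningful gradient on the backward pass.

```python
import torch
import torch.nn as nn

class GatedPredictionNet(nn.Module):
    """Toy prediction network whose hidden activations are gated
    by a separate, input-conditioned neuromodulatory network."""

    def __init__(self, d_in=20, d_hidden=64, n_classes=5):
        super().__init__()
        # Neuromodulatory network: maps the input to one gate per hidden unit.
        self.neuromod = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.Sigmoid(),  # gates in (0, 1)
        )
        # Prediction network: the part fine-tuned at meta-test time.
        self.features = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        gate = self.neuromod(x)        # conditioned on the input
        h = self.features(x) * gate    # selective activation
        # Where gate is ~0, the gradient into `features` is ~0 as well:
        # that is the selective-plasticity effect described above.
        return self.head(h)

net = GatedPredictionNet()
logits = net(torch.randn(8, 20))
print(logits.shape)                    # torch.Size([8, 5])
```

Because the gate multiplies activations rather than hard-masking them, the meta-learner can express anything from fully silencing a unit to merely dampening it.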

In short, ANML meta-learns the parameters of a neuromodulatory network that gates the activations of a separate prediction network, creating selective activation, which in turn creates selective plasticity and results in reduced catastrophic forgetting.

ANML doesn’t just turn learning on and off; it modulates the activation of neurons. ANML has achieved state-of-the-art results, learning 600 sequential tasks with minimal catastrophic forgetting; OML, by comparison, was capable of reaching 200 tasks without catastrophic forgetting.

Outlook 

Currently, the ANML model is used for a computer vision task: recognising different types of human handwriting. In the future, it could be applied to more complex tasks. This research provides some evidence for the view that one should meta-learn solutions to the challenges AI research poses. If ANML keeps improving, AI-generating algorithms may be able to learn as much as possible and one day achieve what the researchers call the community’s grandest ambition: Artificial General Intelligence.
