The lack of labelled data is one of the peskiest challenges in machine learning. A classifier built to separate spam from legitimate mail, cats from dogs, or handle any other classification task needs to be fed appropriately annotated data to make accurate decisions.
However, this is not always the case; the real-world problems that ML models are tasked with solving come with uncertainties and data deficiencies. So keeping the model updated, in other words, making the model smarter even as unknown data keeps arriving, is a challenge.
Active learning is an ongoing research sub-domain within the deep learning space, developed to help models make more accurate decisions.
Active learning involves selecting which unlabelled data items to label so as to best improve an existing classifier.
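One common way to pick which items to label is least-confidence sampling: query the pool items whose top predicted-class probability is lowest. A minimal sketch in NumPy (the function name and the toy probabilities are illustrative, not from any particular library):

```python
import numpy as np

def least_confident(probs, k):
    """Pick the k pool items whose top predicted-class
    probability is lowest, i.e. where the model is least confident."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Predicted class probabilities for a pool of 5 unlabelled items.
probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],   # near the decision boundary
                  [0.20, 0.80],
                  [0.51, 0.49],   # near the decision boundary
                  [0.99, 0.01]])
least_confident(probs, 2)  # → indices 3 and 1
```

The items nearest the decision boundary are queried first, on the intuition that their labels carry the most information for the classifier.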
Where Can It Be Put To Use
It takes autonomous vehicles nearly 11 billion miles of driving to perform just 20% better than a human. If those numbers do not surprise you enough, think about the lack of data and the problems it creates.
According to Nvidia, if humans were to label the validation data, the 100-car fleet driving just eight hours a day would require more than 1 million labellers!
You can’t just dump in any driving data and sit back; the model needs to be prepared for all the adversities it may encounter on the road.
Active learning automates this selection process:
- It starts by training a dedicated deep neural network on already-labelled data.
- The network then sorts through the unlabelled data and selects frames it does not recognise,
- surfacing exactly the data that would otherwise be challenging for the autonomous vehicle algorithm.
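The loop those steps describe can be sketched end to end. The following is a toy illustration in plain NumPy, with a nearest-centroid classifier standing in for the deep network and least-confidence querying standing in for the frame selection; all names and the synthetic data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "frames": two Gaussian blobs standing in for two road scenarios.
X_pool = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y_oracle = np.array([0] * 100 + [1] * 100)   # labels revealed only on query

labelled = [0, 1, 100, 101]                  # small already-labelled seed set

def fit_centroids(X, y):
    """'Train' the stand-in model: one centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_proba(centroids, X):
    """Softmax over negative distances to each class centroid."""
    d = -np.linalg.norm(X[:, None] - centroids[None], axis=2)
    e = np.exp(d - d.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(10):                          # ten query rounds
    centroids = fit_centroids(X_pool[labelled], y_oracle[labelled])
    confidence = predict_proba(centroids, X_pool).max(axis=1)
    confidence[labelled] = np.inf            # never re-query labelled frames
    query = int(confidence.argmin())         # least-confident frame
    labelled.append(query)                   # annotator reveals its label
```

Each round retrains on everything labelled so far and asks the annotator for the single frame the model is least sure about, which is the automation the bullet points describe.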
Active learning has already been shown to improve the detection accuracy of self-driving DNNs over manual curation. For instance, Nvidia’s research team found a threefold increase in precision when training on actively selected data for pedestrian detection.
What’s Next For Active Learning
Active learning is still being heavily researched, including work on applying it to convolutional neural networks (CNNs) and LSTMs.
Here are a few tools that promote active learning:
- modAL is an active learning framework for Python 3, designed to let you rapidly create active learning workflows with nearly complete freedom.
- Prodigy is a Python library that offers a wide range of pre-built workflows and command-line commands for various tasks, along with well-documented components for implementing one’s own workflow scripts. It provides a modern annotation tool to inspect and clean data and to develop rule-based systems to use in combination with statistical models.
- NEXT is a machine learning system that runs in the cloud and makes it easy to develop, evaluate, and apply active learning in the real world.
There is also research being done on implementing Generative Adversarial Networks (GANs) into the active learning framework. With the increasing interest in deep reinforcement learning, researchers are trying to reframe active learning as a reinforcement learning problem.
Here are a few other interesting works in this space:
- Combining Active Learning and Federated Learning
This work presents a new centralised distributed learning algorithm that relies on the learning paradigms of Active Learning and Federated Learning to offer a communication-efficient method.
- Fair Active Learning
This paper explores the challenge of bias in machine learning models. The authors introduce fair active learning (FAL) as a resolution. Given a limited labelling budget, FAL carefully selects the data points to be labelled in order to balance model performance and fairness.
- Adversarial Representations In Active Learning
In this work, the authors demonstrate how recent advances in deep generative models can be used to outperform the state of the art, achieving the highest classification accuracy with as few labels as possible.
Active learning, along with transfer learning and federated learning, will be among the hottest spaces to watch as researchers tweak AI infrastructure for efficient deployment and the management of massive amounts of data in parallel.