Meet AutoAugment, Google’s New Research Which Builds On AutoML Efforts

Google is doubling down on its existing deep learning techniques. The company’s latest research, AutoAugment: Learning Augmentation Policies from Data, explores a reinforcement learning algorithm that increases the amount and diversity of data in an existing training dataset. The work tackles deep learning’s biggest hurdle: the huge amount of quality data needed to train models. The Mountain View-based company has therefore set out to automatically augment existing data with machine learning. Building on the results of AutoML, where the AI research team designed neural network architectures and optimisers to replace components previously designed by humans, the team has now attempted to automate the procedure of data augmentation itself.

The paper describes a procedure for automatically searching for improved data augmentation policies for images. The key idea is to create a search space of augmentation policies and to evaluate the quality of a particular policy directly on the dataset of interest. In the researchers’ design, a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy in turn consists of two operations, each an image processing function such as translation, rotation or shearing, together with the probability and magnitude with which it is applied. “We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset,” notes the research paper.

According to Google, the research departs from earlier state-of-the-art deep learning models, which leveraged hand-designed data augmentation policies; instead, the team relied on reinforcement learning to find optimal image transformation policies from the data itself. “The result improved performance of computer vision models without relying on the production of new and ever-expanding datasets,” noted the blog.

Understanding Data Augmentation

An aura of healthy scepticism always follows deep learning, and the chorus has been led by researchers like Gary Marcus. One side of the argument is that DL took off because of the computing power available to train large models. But the amount of training data required by large models remains a major impediment. There have, however, been a few breakthroughs in data efficiency as well. For example, RL has made a lot of progress on a variety of tasks that do not require large datasets. Similarly, GANs and capsule networks, which are relatively new techniques, work well with less data.

Data augmentation is leveraged to teach a model about invariances in the data, making a neural network robust to these important symmetries and thereby improving its performance. The intuition is that images have many symmetries: flipping a photo horizontally or shifting it slightly does not change the information it contains. AutoAugment designs custom data augmentation policies for computer vision datasets. It guides the selection of image transformations, operations such as flipping an image horizontally or vertically, changing its colour and rotating it, and also predicts which transformations to combine, along with the per-image probability and magnitude of each transformation. The algorithm achieved 83.5 percent top-1 accuracy on the ImageNet dataset and an error rate of 1.5 percent on CIFAR-10.
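The label-preserving intuition behind augmentation can be shown with a toy sketch. The helper names and the tiny nested-list “images” below are hypothetical, chosen only to illustrate how a dataset grows while labels stay unchanged:

```python
def augment_dataset(images, labels, transforms):
    """Expand a labelled dataset by applying label-preserving transforms.
    Each transform produces a new image that keeps the original label."""
    aug_images, aug_labels = list(images), list(labels)
    for img, label in zip(images, labels):
        for t in transforms:
            aug_images.append(t(img))
            aug_labels.append(label)  # the transform does not change the label
    return aug_images, aug_labels

# Two simple symmetry transforms on a 2-D nested-list "image".
flip_h = lambda img: [row[::-1] for row in img]  # horizontal mirror
flip_v = lambda img: img[::-1]                   # vertical mirror

imgs, lbls = augment_dataset([[[1, 2], [3, 4]]], ["cat"], [flip_h, flip_v])
# dataset grows from 1 example to 3, all still labelled "cat"
```

AutoAugment’s contribution is learning *which* such transforms to apply, and with what probability and strength, rather than choosing them by hand.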


Google’s Deep Learning Efforts

Google has taken significant strides in ML and DL, and much of its innovation has come from open source software. The company has also developed commodity hardware, and its dominance in DL has helped it gain momentum in mobile and desktop software. Google’s deep learning techniques also power consumer features such as Google Maps, Google Translate and automated replies in Gmail. Google’s co-founder Sergey Brin said, “Advances in AI helped us understand images in Google Photos, allow Waymo cars to recognise and distinguish objects safely, exponentially improve sound and camera quality in our hardware, understand and produce speech for Google Home; neural networks now translate over 100 languages in Google Translate, caption over a billion videos in 10 languages on YouTube and improve the efficiency of our data centers.”

Also, advances in computing power and data augmentation have helped drive several technical innovations. Besides pioneering developments in deep learning, Google has also made advancements in reinforcement learning which plays a critical role in AI applications. And RL is good at solving problems which fall outside the realm of unsupervised and supervised ML. In fact, much of the research in self-learning stems from reinforcement learning which is a buzzing topic among AI-focused companies.

Copyright Analytics India Magazine Pvt Ltd
