Most neural network architectures in use today have been designed manually by human experts, a time-consuming and error-prone process. This is where Neural Architecture Search (NAS), a subfield of AutoML, comes in: NAS is the process of automating architecture engineering.
Here we list top research works in Neural Architecture Search, ranked by their popularity on GitHub. These works have set new baselines and produced new network families, among other contributions.
Efficient Neural Architecture Search
This work proposes Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach to automatic model design. A controller (an RNN) trained with policy gradient learns to discover neural network architectures by searching for an optimal subgraph within a large shared computational graph. The authors report that ENAS delivers strong empirical performance while using far fewer GPU-hours than existing automatic model design approaches, and is notably about 1000x less expensive than standard Neural Architecture Search.
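To make the idea concrete, here is a toy, self-contained sketch of a controller trained with a policy gradient, using NumPy only. The operation names, number of layers, reward function and learning rate are placeholders of our own choosing (a real ENAS controller is an RNN and the reward is the child model's validation accuracy); this is not the authors' implementation.

```python
import numpy as np

# Toy REINFORCE-style controller: one categorical distribution per layer,
# each choosing an operation for that position in the shared graph.
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]
NUM_LAYERS = 4

rng = np.random.default_rng(0)
logits = np.zeros((NUM_LAYERS, len(OPS)))  # controller parameters

def sample_architecture():
    """Sample one child architecture (a path through the shared graph)."""
    arch, log_prob = [], 0.0
    for layer_logits in logits:
        probs = np.exp(layer_logits) / np.exp(layer_logits).sum()
        choice = rng.choice(len(OPS), p=probs)
        arch.append(OPS[choice])
        log_prob += np.log(probs[choice])
    return arch, log_prob

def reward(arch):
    """Placeholder for the validation accuracy of the trained child model."""
    return 0.5 + 0.1 * arch.count("conv3x3")  # stand-in signal only

baseline, lr = 0.0, 0.1
for step in range(100):
    arch, log_prob = sample_architecture()
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r  # moving-average baseline
    # REINFORCE update: push the logits of the sampled ops toward (r - baseline).
    for i, op in enumerate(arch):
        probs = np.exp(logits[i]) / np.exp(logits[i]).sum()
        grad = -probs
        grad[OPS.index(op)] += 1.0  # d log p(op) / d logits = onehot - probs
        logits[i] += lr * (r - baseline) * grad
```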
The Evolved Transformer
In this work, researchers from Google Brain apply NAS to search for a better alternative to the Transformer. They first construct a large search space and then run an evolutionary architecture search, seeding the initial population with the Transformer itself. The resulting architecture, the Evolved Transformer, showed consistent improvements over the Transformer on four well-established language tasks.
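The sketch below illustrates the warm-starting idea in miniature: a tournament-selection evolutionary loop whose initial population contains a hand-designed seed architecture. The operation vocabulary, mutation rule and fitness function are all stand-ins for training and evaluating real Transformer-like models, not the paper's actual search.

```python
import random

SEED_ARCH = ["self_attn", "ffn", "self_attn", "ffn"]   # stands in for the Transformer
OPS = ["self_attn", "ffn", "sep_conv", "gated_linear"]

def fitness(arch):
    """Placeholder for the validation performance of a trained candidate."""
    return sum(1.0 if op in ("self_attn", "sep_conv") else 0.5 for op in arch)

def mutate(arch):
    """Replace one randomly chosen layer with a random operation."""
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

population = [(SEED_ARCH, fitness(SEED_ARCH))]   # warm start with the seed model
history = list(population)
for _ in range(200):
    sample = random.sample(population, min(5, len(population)))
    parent = max(sample, key=lambda x: x[1])      # tournament selection
    child = mutate(parent[0])
    population.append((child, fitness(child)))
    history.append(population[-1])
    if len(population) > 50:
        population.pop(0)                         # age out the oldest individual

best = max(history, key=lambda x: x[1])
print("best architecture found:", best[0])
```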
MobileDets
In this work, the authors report substantial improvements in the latency-accuracy trade-off, obtained by adding regular convolutions to the search space and letting neural architecture search place them effectively in the network. The work resulted in MobileDets, a family of object detection models that achieve state-of-the-art results across mobile accelerators.
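One way to picture a hardware-aware search like this is as scoring candidate block sequences by accuracy with a soft latency penalty. The sketch below uses an MnasNet-style objective with entirely made-up block names, accuracy gains and latencies; it only illustrates how regular (fused) convolutions can win once latency enters the objective, and is not MobileDets' exact formulation.

```python
# Illustrative latency-aware scoring; all numbers below are invented for the example.
BLOCKS = {
    "ibn_3x3":   {"acc_gain": 0.012, "latency_ms": 1.1},  # inverted bottleneck
    "fused_3x3": {"acc_gain": 0.015, "latency_ms": 1.4},  # regular/fused full conv
    "tucker":    {"acc_gain": 0.010, "latency_ms": 0.9},
}

TARGET_LATENCY_MS = 12.0
W = -0.07  # exponent controlling how strongly the latency target is enforced

def score(arch):
    """MnasNet-style soft-constrained objective: accuracy * (latency / target) ** W."""
    acc = 0.60 + sum(BLOCKS[b]["acc_gain"] for b in arch)
    lat = sum(BLOCKS[b]["latency_ms"] for b in arch)
    return acc * (lat / TARGET_LATENCY_MS) ** W

candidates = [
    ["ibn_3x3"] * 10,
    ["fused_3x3"] * 4 + ["ibn_3x3"] * 6,   # mix regular convs into the network
    ["tucker"] * 5 + ["ibn_3x3"] * 5,
]
print("best candidate under the latency-aware score:", max(candidates, key=score))
```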
Progressive Neural Architecture Search
In this work, the authors propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. This approach uses a sequential model-based optimisation (SMBO) strategy. Results show that this method is up to 5 times more efficient than the popular RL methods in terms of the number of models evaluated, and 8 times faster in terms of total compute.
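A minimal sketch of the SMBO loop follows: cells are grown one block at a time, every expansion is ranked by a cheap surrogate, and only the top-K candidates are actually trained. The surrogate and train_and_eval below are trivial placeholders (the paper uses a learned accuracy predictor), so only the control flow is representative.

```python
import random

OPS = ["sep3x3", "sep5x5", "maxpool", "identity"]
K = 4            # how many candidates survive each round
MAX_BLOCKS = 3   # how far the cells are grown

def train_and_eval(cell):
    """Stand-in for training a child network built from `cell` and measuring accuracy."""
    return random.random()

def surrogate_predict(cell, history):
    """Stand-in predictor: average accuracy of already-evaluated cells sharing a prefix."""
    matches = [acc for c, acc in history if c[: len(cell) - 1] == cell[:-1]]
    return sum(matches) / len(matches) if matches else 0.5

history = []
beam = [[]]
for _ in range(MAX_BLOCKS):
    # Expand every cell in the beam by one more block.
    candidates = [cell + [op] for cell in beam for op in OPS]
    # Rank all expansions cheaply, then pay for training only the top-K.
    candidates.sort(key=lambda c: surrogate_predict(c, history), reverse=True)
    beam = candidates[:K]
    history += [(cell, train_and_eval(cell)) for cell in beam]

best = max(history, key=lambda x: x[1])
print("best cell:", best[0])
```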
DARTS
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Instead of applying evolution or reinforcement learning over a discrete and non-differentiable search space, this method is based on the continuous relaxation of the architecture representation, allowing an efficient search of the architecture using gradient descent. Experimental results show that this algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modelling.
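The core trick is easy to show in a few lines of PyTorch (assumed here as the framework): each edge computes a softmax-weighted mixture of candidate operations, so the mixing weights become ordinary differentiable parameters. The toy loop below simply alternates gradient steps on the operation weights and the architecture weights; the paper's actual bilevel optimisation uses separate training and validation splits and an approximation of the architecture gradient.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of the cell: a softmax-weighted mixture of candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture parameters

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(channels=8)
w_opt = torch.optim.SGD([p for n, p in edge.named_parameters() if n != "alpha"], lr=0.05)
a_opt = torch.optim.Adam([edge.alpha], lr=0.01)
x, target = torch.randn(4, 8, 16, 16), torch.randn(4, 8, 16, 16)

for step in range(10):
    # Alternate updates (the paper uses separate train/val data for the two steps).
    for opt in (w_opt, a_opt):
        opt.zero_grad()
        loss = F.mse_loss(edge(x), target)
        loss.backward()
        opt.step()

print("learned operation preference:", F.softmax(edge.alpha, dim=0))
```

After search, the discrete architecture is recovered by keeping the operation with the largest mixing weight on each edge.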
MorphNet
With MorphNet, the researchers aim to automate the design of neural network structures. MorphNet iteratively shrinks and expands a network. In contrast to previous approaches, MorphNet is scalable to large networks, adaptable to specific resource constraints and capable of increasing the network’s performance. When applied to standard network architectures on a wide variety of datasets, this approach discovers novel structures in each domain, obtaining higher performance.
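The sketch below caricatures one shrink-and-expand cycle: channels whose learned scale factors fall below a threshold are pruned (shrink), and the surviving widths are then uniformly multiplied to spend the freed-up budget again (expand). The scale values, per-layer costs and the linear budget proxy are illustrative assumptions, not MorphNet's actual FLOP or model-size regularisers.

```python
import numpy as np

rng = np.random.default_rng(0)
widths = [32, 64, 128]                                  # channels per layer
gammas = [np.abs(rng.normal(size=w)) for w in widths]   # stand-in per-channel scale factors
cost_per_channel = [4.0, 2.0, 1.0]                      # pretend resource cost per channel

def shrink(gammas, threshold=0.4):
    """Keep only channels whose scale factor exceeds the threshold."""
    return [int((g > threshold).sum()) for g in gammas]

def expand(widths, budget, costs):
    """Uniformly scale widths back up to roughly the budget (crude linear proxy)."""
    used = sum(w * c for w, c in zip(widths, costs))
    factor = budget / used
    return [max(1, int(round(w * factor))) for w in widths]

budget = sum(w * c for w, c in zip(widths, cost_per_channel))
shrunk = shrink(gammas)
expanded = expand(shrunk, budget, cost_per_channel)
print("original:", widths, "-> shrunk:", shrunk, "-> expanded:", expanded)
```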
Neural Architecture Search Without Labels
This paper explores a pressing question: can we do away with human annotations and find high-quality neural architectures using only the images themselves? To answer this, the researchers first define a new setup called Unsupervised Neural Architecture Search (UnNAS). They then train a large number of diverse architectures with either supervised or unsupervised objectives and find that the architecture rankings produced with and without labels are highly correlated. The results suggest that labels are not necessary, and that image statistics alone may be sufficient to identify good neural architectures.
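The experiment boils down to a rank-correlation check, which is easy to sketch: score the same pool of architectures with and without labels and compute the Spearman correlation of the two rankings. The accuracy arrays below are synthetic placeholders standing in for trained results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder scores for 20 architectures: one column per training objective.
supervised_acc = rng.uniform(0.70, 0.95, size=20)            # with labels
pretext_acc = supervised_acc + rng.normal(0, 0.02, size=20)   # label-free pretext task

def ranks(x):
    """Rank of each entry (0 = worst, n-1 = best)."""
    return np.argsort(np.argsort(x))

# Spearman rank correlation = Pearson correlation of the ranks.
rho = np.corrcoef(ranks(supervised_acc), ranks(pretext_acc))[0, 1]
print(f"rank correlation between label-based and label-free rankings: {rho:.3f}")
# A high rho means the pretext task ranks architectures much like the supervised one.
```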
SpineNet
This paper argues that standard convolutional backbones, designed for classification, do not perform well on tasks requiring simultaneous recognition and localisation. Encoder-decoder architectures try to resolve this by attaching a decoder network to a classification backbone, but the authors argue that this arrangement is ineffective. They instead propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections, learned on an object detection task via Neural Architecture Search. Built from similar building blocks, SpineNet models outperform ResNet-FPN models.
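A scale-permuted backbone can be encoded quite simply for search purposes, for example as a list of blocks, each carrying a feature level and the earlier blocks it takes input from. The encoding and the random-search loop below are illustrative assumptions, not SpineNet's actual specification or search procedure.

```python
import random

NUM_BLOCKS = 7
LEVELS = [2, 3, 4, 5, 6, 7]  # feature level L corresponds to stride 2**L

def random_backbone():
    """Sample a candidate: a level per block plus two cross-scale input connections."""
    levels = [random.choice(LEVELS) for _ in range(NUM_BLOCKS)]   # the permutation being searched
    # block i may take input from the two stem outputs (indices 0, 1) or any earlier block
    parents = [sorted(random.sample(range(i + 2), 2)) for i in range(NUM_BLOCKS)]
    return list(zip(levels, parents))

def detection_proxy_score(backbone):
    """Stand-in for training and evaluating a detector built on this backbone."""
    return random.random()

best = max((random_backbone() for _ in range(100)), key=detection_proxy_score)
for i, (level, parents) in enumerate(best):
    print(f"block {i}: level P{level}, inputs from blocks {parents}")
```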
Randomly Wired Neural Networks
The researchers at Facebook explored a more diverse set of connectivity patterns with Randomly Wired Neural Networks for Image Recognition. Randomly wired networks are founded on random graph models in graph theory.
According to the authors, randomly wired neural networks outperformed human-designed networks such as ResNet and ShuffleNet under a typical computation budget, and remained competitive when the computation budget was increased.
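The wiring step itself is straightforward to sketch, assuming the networkx library is available: sample a classic random graph (Watts-Strogatz here), orient every edge from the lower- to the higher-indexed node to obtain a DAG, and read off each node's inputs. In the paper, each node then applies a weighted sum of its inputs followed by a ReLU-convolution block; that part is only noted in a comment here.

```python
import networkx as nx

# Sample an undirected Watts-Strogatz graph (one of the generators used in the paper).
graph = nx.watts_strogatz_graph(n=16, k=4, p=0.25, seed=0)

# Orient every edge from lower to higher node index to obtain a DAG.
dag = nx.DiGraph()
dag.add_nodes_from(graph.nodes)
dag.add_edges_from((min(u, v), max(u, v)) for u, v in graph.edges)

inputs = [n for n in dag.nodes if dag.in_degree(n) == 0]    # nodes fed by the stem
outputs = [n for n in dag.nodes if dag.out_degree(n) == 0]  # nodes averaged into the head
print("input nodes:", inputs)
print("output nodes:", outputs)

for node in nx.topological_sort(dag):
    preds = list(dag.predecessors(node))
    # In the paper each node computes a weighted sum of its inputs, then ReLU + separable conv.
    print(f"node {node}: takes input from {preds}")
```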
To keep yourself up to date with the latest developments in NAS, check this.