Top Works In Neural Architecture Search



Currently employed neural network architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Neural Architecture Search (NAS), a subset of AutoML, addresses this problem: it is the process of automating architecture engineering.

Source: automl.org

Here we list the top research works in Neural Architecture Search based on their popularity on GitHub. These works have set new baselines, produced new network families and more.

ENAS

This work proposes Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In this approach, a controller (an RNN) trained with policy gradient learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The authors state that ENAS delivers strong empirical performance using far fewer GPU-hours than existing automatic model design approaches and, notably, is 1000x less expensive than standard Neural Architecture Search.
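A minimal sketch of the shared-supergraph idea, assuming invented op names and a purely random sampler (the real controller is an RNN updated with policy gradient): every candidate architecture is just a small subgraph picked out of one fixed graph.

```python
import random

# Toy ENAS-style search space: each node picks one op and one earlier
# node to connect to, so every sampled architecture is a subgraph of a
# single shared "supergraph".  Op names here are illustrative only.
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def sample_architecture(num_nodes=4, rng=random):
    """Sample a subgraph as a list of (previous_node_index, op_name)."""
    arch = []
    for node in range(num_nodes):
        prev = rng.randrange(node + 1)  # connect to the input (0) or an earlier node
        op = rng.choice(OPS)
        arch.append((prev, op))
    return arch

arch = sample_architecture(rng=random.Random(0))
print(arch)
```

Because all subgraphs share the supergraph's weights, evaluating a sample does not require training it from scratch, which is where the claimed cost savings come from.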



Link to the paper.

The Evolved Transformer

In this work, the researchers from Google Brain attempt to apply NAS to search for a better alternative to the Transformer. They first constructed a large search space and then ran an evolutionary architecture search by seeding the initial population with the Transformer. The results showed that the architecture — the Evolved Transformer — demonstrated consistent improvement over the Transformer on four well-established language tasks.
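The search loop itself can be sketched as a tiny elitist tournament-selection evolution seeded with one individual, in the same way the paper seeds its population with the Transformer. Everything below (the integer "genome", the fitness function, the mutation) is a made-up stand-in, and the paper's actual search uses a regularised, age-based variant rather than this drop-the-worst one.

```python
import random

def evolve(seed, fitness, mutate, population_size=8, generations=20, rng=None):
    """Toy elitist evolutionary loop seeded with a single individual."""
    rng = rng or random.Random(0)
    population = [seed] + [mutate(seed, rng) for _ in range(population_size - 1)]
    for _ in range(generations):
        parent = max(rng.sample(population, 3), key=fitness)  # tournament selection
        population.append(mutate(parent, rng))
        population.sort(key=fitness)
        population.pop(0)                                     # drop the worst
    return max(population, key=fitness)

# Hypothetical stand-ins: an "architecture" is a list of ints, fitness
# rewards genes near 5, and mutation perturbs each gene by at most 1.
fit = lambda arch: -sum((g - 5) ** 2 for g in arch)
mut = lambda arch, rng: [g + rng.choice([-1, 0, 1]) for g in arch]

best = evolve([0, 0, 0], fit, mut)
print(best, fit(best))
```

Seeding with a known-good architecture means the search starts from a strong region of the space instead of from random individuals.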

Link to the paper.

MobileDets

In this work, the authors report substantial improvements in the latency-accuracy trade-off from incorporating regular convolutions in the search space and placing them effectively in the network via neural architecture search. The work resulted in a family of object detection models, MobileDets, that achieve state-of-the-art results across mobile accelerators.

Link to the paper.

Progressive Neural Architecture Search

In this work, the authors propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. The approach uses a sequential model-based optimisation (SMBO) strategy: architectures are explored in order of increasing complexity, while a surrogate model learns to predict their performance so that only promising candidates are ever trained. Results show that this method is up to 5 times more efficient than the popular RL methods in terms of the number of models evaluated, and 8 times faster in terms of total compute.
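A miniature sketch of the SMBO flavour, assuming an invented op list and a toy fixed scoring rule in place of the paper's learned surrogate: grow architectures one block at a time, score every one-step expansion cheaply, and keep only the top-K for the next round.

```python
# Toy progressive search: only the K best partial architectures (by a
# cheap surrogate score) are expanded each round, so far fewer models
# ever need real evaluation.  All names and scores here are invented.
OPS = ["conv3x3", "conv5x5", "maxpool"]

def surrogate(arch):
    """Hypothetical performance predictor: a fixed toy rule that scores
    convolutions a flat 2 and rewards pooling placed later (i + 1)."""
    return sum((i + 1) if op == "maxpool" else 2 for i, op in enumerate(arch))

def progressive_search(max_len=3, beam=2):
    frontier = [[op] for op in OPS]
    while len(frontier[0]) < max_len:
        candidates = [arch + [op] for arch in frontier for op in OPS]
        candidates.sort(key=surrogate, reverse=True)
        frontier = candidates[:beam]   # only the top-K survive each round
    return frontier[0]

print(progressive_search())
```

The real method trains the surrogate on the measured accuracies of the architectures evaluated so far; the fixed rule above just stands in for that predictor.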

Link to the paper.

DARTS

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Instead of applying evolution or reinforcement learning over a discrete and non-differentiable search space, this method is based on the continuous relaxation of the architecture representation, allowing an efficient search of the architecture using gradient descent. Experimental results show that this algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modelling.
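The continuous relaxation at the heart of DARTS can be shown in a few lines, assuming toy scalar "operations" in place of real network layers: each edge outputs a softmax-weighted mixture of all candidate ops, so the architecture parameters (the alphas) become continuous and can receive gradients.

```python
import math

# DARTS's key idea in miniature: replace a discrete op choice with a
# softmax-weighted mixture over all candidates.  The ops below are toy
# scalar functions standing in for real convolution/pooling layers.
ops = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "zero":     lambda x: 0.0,
}

def mixed_op(x, alphas):
    """Continuous relaxation: output = sum_i softmax(alpha)_i * op_i(x)."""
    exps = [math.exp(a) for a in alphas]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * op(x) for w, op in zip(weights, ops.values()))

# Equal alphas give each op weight 1/3: (3 + 6 + 0) / 3 = 3.0.
print(mixed_op(3.0, [0.0, 0.0, 0.0]))
# A large alpha on "double" makes the mixture collapse towards 2*x.
print(mixed_op(3.0, [-10.0, 10.0, -10.0]))
```

After search, the discrete architecture is recovered by keeping the highest-weighted op on each edge.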

Link to the paper.

MorphNet

With MorphNet, the researchers aim to automate the design of neural network structures. MorphNet iteratively shrinks and expands a network. In contrast to previous approaches, MorphNet is scalable to large networks, adaptable to specific resource constraints and capable of increasing the network’s performance. When applied to standard network architectures on a wide variety of datasets, this approach discovers novel structures in each domain, obtaining higher performance.
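The shrink/expand alternation can be mimicked on layer widths alone. In the real method, shrinking comes from a resource-weighted sparsifying regulariser and expansion from a uniform width multiplier; the "importance" scores and budget below are invented numbers purely for illustration.

```python
# Toy MorphNet-style loop on layer widths only.
def shrink(importances, threshold=0.1):
    """Count surviving channels per layer after pruning weak ones
    (stand-in for the sparsifying regulariser zeroing out channels)."""
    return [sum(1 for imp in layer if imp > threshold) for layer in importances]

def expand(widths, budget):
    """Uniformly scale the surviving widths back up to a target budget
    (stand-in for the uniform width multiplier)."""
    factor = budget / sum(widths)
    return [max(1, round(w * factor)) for w in widths]

# Hypothetical per-channel importance scores for a 2-layer network.
importances = [[0.9, 0.05, 0.4], [0.02, 0.8, 0.03, 0.7]]
shrunk = shrink(importances)      # two channels survive in each layer
print(expand(shrunk, budget=8))   # widths rescaled to spend the budget
```

Alternating the two steps reallocates capacity from unimportant layers to important ones while staying within the resource constraint.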

Link to the paper.

Neural Architecture Search Without Labels

This paper explores a pressing question: can we do away with human annotations of images and find high-quality neural architectures using only the images themselves? To answer this, the researchers first defined a new setup called Unsupervised Neural Architecture Search (UnNAS). They then trained a large number of diverse architectures with either supervised or unsupervised objectives and found that the architecture rankings produced with and without labels are highly correlated. The results suggest that labels are not necessary, and that image statistics alone may be sufficient to identify good neural architectures.


Link to the paper.

SpineNet

This paper states that conventional convolutional backbones do not perform well on tasks requiring simultaneous recognition and localisation. Encoder-decoder architectures attempt to resolve this by applying a decoder network to a backbone model designed for classification. In this work, the authors argue that the encoder-decoder architecture is ineffective, and propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search. Using similar building blocks, SpineNet models outperform ResNet-FPN models.

Link to the paper.

Randomly Wired Neural Networks

The researchers at Facebook explored a more diverse set of connectivity patterns with Randomly Wired Neural Networks for Image Recognition. Randomly wired networks are founded on random graph models in graph theory.

According to the authors, randomly wired neural networks outperform human-designed networks such as ResNet and ShuffleNet under a typical computation budget, and remain competitive as the computation budget increases.
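The random-wiring idea can be sketched with an Erdős–Rényi-style generator: sample edges at random, then orient every edge from lower to higher node index, which guarantees a directed acyclic graph. Mapping each node to a convolutional block, as the paper does, is omitted here; the parameters below are illustrative.

```python
import random

# Sample an Erdos-Renyi-style random graph and orient every edge from
# lower to higher node index, so the result is a DAG by construction.
def random_dag(num_nodes=8, edge_prob=0.4, seed=0):
    rng = random.Random(seed)
    return [(i, j)
            for i in range(num_nodes)
            for j in range(i + 1, num_nodes)
            if rng.random() < edge_prob]

edges = random_dag()
assert all(i < j for i, j in edges)  # acyclic: edges only point "forward"
print(len(edges), "edges:", edges[:5])
```

The paper also explores Watts–Strogatz and Barabási–Albert generators; the point is that the graph generator, not a hand-designed wiring, determines the network's connectivity.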

Read more about this here.

To keep yourself up to date with the latest developments in NAS, check this.


Copyright Analytics India Magazine Pvt Ltd
