Randomly Wired Neural Networks: A Breakthrough In Neural Architecture Search


AI pioneer Frank Rosenblatt, in his famous 1958 paper on the perceptron, wrote that the physical connections of the nervous system are not identical and that, at birth, the construction of the most important networks is largely random. Those were the early days of computational breakthroughs, and researchers were already hinting that it is okay for machines to be unorganised.



Today, the success of deep learning approaches owes much to the refinement of neural networks over the years. These networks are built from nodes that interact with other nodes according to wiring rules, and the way a connection is established also influences how quickly the network learns a given task.

For example, widely popular models like ResNet, which are usually deployed for transfer learning in image recognition tasks, owe their success to their wiring pattern.

Traditional wiring patterns run into problems like non-convexity and a large hypothesis class. Though human-designed wiring, like that of convolutional neural networks, has shown great reliability over the years, researchers are optimistic about the outcome of inducing some chaos into the black box.

To turn the tide, researchers are now exploring alternatives to conventional neural architecture search (NAS).

Overview Of The Architecture Of Randomly Wired Networks

Figure: Randomly wired neural networks generated by the classical Watts-Strogatz (WS) model.

In a paper titled “Exploring Randomly Wired Neural Networks for Image Recognition”, researchers at Facebook AI explored a more diverse set of connectivity patterns with the help of randomly wired neural networks and applied them to image recognition tasks.

The approach of randomly wired networks is based on random graph models in graph theory.

Here the researchers used three classes of graph models: Erdős-Rényi (ER), Barabási-Albert (BA), and Watts-Strogatz (WS).

In the ER model with N nodes, an edge between any two nodes exists with probability P, independently of all other nodes and edges. The BA model generates a random graph by sequentially adding new nodes, each of which attaches to existing nodes with probability proportional to their degree (preferential attachment).

The WS model, in contrast, generates a small-world graph: the N nodes are placed in a ring and connected to their nearest neighbours; then, in a clockwise loop, for every node, the edge that connects it to its next node is rewired with probability P. “Rewiring” here means uniformly choosing a random node as the new target.
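The following sketch shows how these three graph families can be sampled with the networkx library; the parameter values (N, P, K, M) are illustrative, not the settings used in the paper:

```python
import networkx as nx

N = 32  # number of nodes in the random graph

# ER: each possible edge is included independently with probability P
er_graph = nx.erdos_renyi_graph(N, p=0.2)

# BA: nodes are added one by one, each attaching M edges to existing
# nodes with probability proportional to their degree
ba_graph = nx.barabasi_albert_graph(N, m=5)

# WS: a ring lattice where each node links to its K nearest neighbours,
# after which each edge is rewired to a random node with probability P
ws_graph = nx.watts_strogatz_graph(N, k=4, p=0.25)

# As in the paper, the undirected graph can be turned into a DAG by
# orienting every edge from the smaller node index to the larger one
dag = nx.DiGraph([(min(u, v), max(u, v)) for u, v in ws_graph.edges()])
assert nx.is_directed_acyclic_graph(dag)
```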

Figure: an example WS graph (source: Henry AI Labs, YouTube).

Let's take the WS graph algorithm as an example. In the figure above, the nodes are connected to random neighbours, and a few are even disconnected. Each node houses a small computation: in the paper, a weighted sum of the node's inputs followed by a ReLU activation, a convolution and batch normalisation. In a traditional architecture, after batch-norm completes in one node, the adjacent node is fired, and so on; here, the connections between nodes are made randomly.
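A minimal PyTorch sketch of such a node, assuming the paper's weighted-sum aggregation and ReLU-conv-BN triplet (the paper uses a separable 3x3 convolution; a plain convolution is used here for brevity, and the class name RandWireNode is illustrative):

```python
import torch
import torch.nn as nn

class RandWireNode(nn.Module):
    """One node: weighted sum over input edges, then ReLU-conv-BN."""
    def __init__(self, in_degree: int, channels: int):
        super().__init__()
        # one learnable weight per input edge, squashed to (0, 1) by sigmoid
        self.edge_weights = nn.Parameter(torch.zeros(in_degree))
        self.op = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, inputs):  # inputs: one tensor per incoming edge
        w = torch.sigmoid(self.edge_weights)
        x = sum(wi * xi for wi, xi in zip(w, inputs))
        return self.op(x)  # the same output is sent along every out-edge

# usage: a node with two input edges carrying 64-channel feature maps
node = RandWireNode(in_degree=2, channels=64)
out = node([torch.randn(1, 64, 8, 8), torch.randn(1, 64, 8, 8)])
```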


Results On ImageNet: Regular Computation Regime With FLOPs Comparable To ResNet

The experiments reported in the paper show that randomly wired neural networks can outperform human-designed networks such as ResNet and ShuffleNet on image recognition tasks when the computation budget is regular, and can go toe to toe with them in the larger computation regime. The fact that a random generation technique can hold its own against the latest hand-designed and NAS-optimised models is something that will hopefully be explored further in the future.

Future Direction

The concept of random connections has been around for a while. It dates back to as far as the 1940s, when Turing suggested the concept of unorganised machines, a form of the earliest randomly connected neural networks. This work was followed by another AI pioneer, Marvin Minsky, who implemented a variant of the approach using randomly wired vacuum tubes.

The objective behind exploring this novel approach of randomly wired neural networks is to enable a robust network generator. The researchers insist that such network generators are capable of producing new networks unknown to human researchers, networks that can give the current state-of-the-art models a tough time.
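The generator idea can be made concrete as a function that maps generator parameters and a random seed to one sampled network. A minimal sketch, reusing the WS family from above (the function name network_generator is illustrative, not from the paper):

```python
import networkx as nx

def network_generator(n_nodes: int, k: int, p: float, seed: int) -> nx.Graph:
    """Map generator parameters (k, p) and a seed to one sampled wiring."""
    return nx.watts_strogatz_graph(n_nodes, k=k, p=p, seed=seed)

# one parameter setting, many seeds: each seed yields a different,
# equally valid network drawn from the same family
wirings = [network_generator(32, k=4, p=0.25, seed=s) for s in range(5)]
```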

From making networks learn features to automating the task of generating unique networks, the successful demonstration of this work signals that the machine learning community is making a slow transition from network engineering to network generator engineering.
