Recently, a team of researchers from MIT, Institute of Science and Technology Austria (IST Austria) and Technische Universität Wien (TU Wien) developed an AI system by combining brain-inspired neural computation principles and scalable deep learning architectures.
The AI system is a brain-inspired intelligent agent that learns to control an autonomous vehicle directly from its camera inputs. The researchers discovered that a single network with 19 control neurons, connecting 32 encapsulated input features to outputs via 253 synapses, learns to map high-dimensional inputs to steering commands.
One of the interesting aspects of this research is that the AI agent draws on neural computations known to occur in biological brains in order to achieve a remarkable degree of controllability. The researchers took their inspiration from animals as small as roundworms, attempting to capture abilities such as locomotion and navigation.
In recent years, techniques like deep learning have gained much traction among researchers and organisations due to their effectiveness in complex applications. According to the researchers, although deep learning algorithms have achieved noteworthy successes in various high-dimensional tasks, these algorithms still face a range of representation-learning challenges. The researchers therefore worked towards a single, task-specific algorithm that satisfies these representation-learning challenges.
Furthermore, in the case of self-driving vehicles, learned vehicle-control agents often show great performance in offline testing and simulation, but this performance degrades considerably during live driving. Learning end-to-end control remains one of the key challenges in autonomous driving.
To mitigate these issues, the team developed an AI system with two main components: cameras and a brain-inspired intelligent agent.
Behind the AI System
For the AI system, the researchers developed compact representations called Neural Circuit Policies (NCPs), where each neuron has increased computational capabilities compared with contemporary deep models.
In this process, the representation-learning challenges served as the main criteria for assessing the performance of autonomous-control agents. Developed to address these challenges and the complexity of autonomous lane-keeping, an NCP is an end-to-end learning system that perceives its inputs through a set of convolutional layers, extracts image features, and performs control with an RNN structure.
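The perception-then-control pipeline described above can be sketched in plain Python. Everything below is illustrative: the feature extractor is a stand-in for the convolutional layers, and the layer sizes and random weights are placeholders, not the researchers' actual architecture.

```python
import math
import random

random.seed(0)

def extract_features(image, n_features=32):
    """Stand-in for the convolutional perception head: in the real system,
    CNN layers compress a camera frame into a compact feature vector.
    Here we just squash simple pixel statistics into n_features values."""
    mean = sum(image) / len(image)
    return [math.tanh(mean * (i + 1) / n_features) for i in range(n_features)]

class RecurrentController:
    """Toy RNN control head: 32 features -> small hidden state -> steering."""
    def __init__(self, n_in=32, n_hidden=19):
        self.w_in = [[random.gauss(0, 0.1) for _ in range(n_in)]
                     for _ in range(n_hidden)]
        self.w_rec = [[random.gauss(0, 0.1) for _ in range(n_hidden)]
                      for _ in range(n_hidden)]
        self.w_out = [random.gauss(0, 0.1) for _ in range(n_hidden)]
        self.h = [0.0] * n_hidden  # hidden state persists across frames

    def step(self, x):
        new_h = []
        for i in range(len(self.h)):
            pre = sum(w * v for w, v in zip(self.w_in[i], x))
            pre += sum(w * v for w, v in zip(self.w_rec[i], self.h))
            new_h.append(math.tanh(pre))
        self.h = new_h
        return math.tanh(sum(w * v for w, v in zip(self.w_out, self.h)))

controller = RecurrentController()
frame = [random.random() for _ in range(64 * 64)]    # stand-in camera frame
steering = controller.step(extract_features(frame))  # steering value in (-1, 1)
```

The key structural point is that the recurrent state `self.h` carries information between frames, which is what lets such a controller handle the temporal aspect of driving.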
At their core, NCPs possess a nonlinear, time-varying synaptic transmission mechanism that improves their expressive power in modelling time series compared with their deep learning counterparts. The foundational neural building blocks of NCPs are called liquid time-constant (LTC) networks.
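In simplified form, an LTC neuron's state follows an ODE of the shape dx/dt = −(1/τ + f(x, I))·x + f(x, I)·A, where the bounded nonlinearity f couples the state and the input, so the neuron's effective time constant varies with its input (the "liquid" part of the name). A minimal explicit-Euler sketch, with illustrative parameter values rather than the paper's exact formulation:

```python
import math

def f(x, inp, w=1.0, b=0.0):
    """Bounded synaptic nonlinearity: a sigmoid of the weighted input
    relative to the current state, so its output stays in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(w * inp + b - x)))

def ltc_step(x, inp, tau=1.0, A=1.0, dt=0.05):
    """One explicit-Euler step of the simplified LTC dynamics
    dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A.
    Because f depends on both state and input, the effective
    time constant 1/(1/tau + f) changes as the input changes."""
    gate = f(x, inp)
    dxdt = -(1.0 / tau + gate) * x + gate * A
    return x + dt * dxdt

x = 0.0
for _ in range(200):          # drive the neuron with a constant input
    x = ltc_step(x, inp=0.5)
# the state settles between 0 and A, since the gate is bounded in (0, 1)
```

The boundedness of the gate is what keeps the state stable, which is one reason compact LTC-based controllers behave predictably on time-series inputs.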
The Roundworm Inspiration
As mentioned earlier, the network structure of NCPs is inspired by the wiring diagram of the roundworm. According to the researchers, this wiring diagram achieves a sparsity of around 90%, with predominantly feedforward connections from sensors to intermediate neurons, highly recurrent connections among inter-neurons and command neurons, and feedforward connections from command neurons to motor neurons.
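The four-layer sparse topology can be sketched as a set of connection masks. The layer sizes and fan-outs below are illustrative choices, not the worm's actual connectome or the paper's exact wiring:

```python
import random

random.seed(0)

# Illustrative layer sizes for the four neuron populations.
SENSORY, INTER, COMMAND, MOTOR = 32, 12, 6, 1

def sparse_mask(n_src, n_dst, fanout):
    """Each source neuron connects to a small random subset of targets,
    leaving most of the possible synapses absent."""
    mask = [[0] * n_dst for _ in range(n_src)]
    for i in range(n_src):
        for j in random.sample(range(n_dst), min(fanout, n_dst)):
            mask[i][j] = 1
    return mask

wiring = {
    "sensory->inter":   sparse_mask(SENSORY, INTER, fanout=1),   # feedforward
    "inter->command":   sparse_mask(INTER, COMMAND, fanout=1),   # feedforward
    "command->command": sparse_mask(COMMAND, COMMAND, fanout=2), # recurrent
    "command->motor":   sparse_mask(COMMAND, MOTOR, fanout=1),   # feedforward
}

used = sum(sum(sum(row) for row in m) for m in wiring.values())
possible = sum(len(m) * len(m[0]) for m in wiring.values())
sparsity = 1 - used / possible   # fraction of possible synapses left unused
```

With these fan-outs the toy wiring comes out close to the roughly 90% sparsity figure; the point is that only a small fraction of all possible synapses exist, which is what makes the resulting networks both small and inspectable.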
This specific topology was shown to have attractive computational advantages, such as efficient distributed control, requiring a small number of neurons, hierarchical temporal dynamics, robot-learning capabilities and maximal information propagation in sparse-flow networks.
Neural Circuit Policies (NCPs)
According to the researchers, a full-stack NCP network is 63 times smaller than the convolutional neural network that established the state of the art in end-to-end driving. Also, the control network of an NCP is 970 times sparser than that of a Long Short-Term Memory (LSTM) network and 241 times sparser than that of a continuous-time RNN (CT-RNN).
Furthermore, the RNN compartment of an NCP has a trainable parameter space 233 times smaller than that of an LSTM and 59 times smaller than that of a CT-RNN. The model can improve the performance and transparency of black-box components while proficiently controlling a vehicle on previously unseen roads.
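Where the gap comes from can be seen with a back-of-the-envelope parameter count. The formulas below are the standard ones for dense LSTM and vanilla/CT-RNN cells; the layer sizes are illustrative, so the exact ratios differ from the architectures compared in the paper:

```python
def lstm_params(n_in, n_hidden):
    """A standard LSTM layer has four gates, each with input weights,
    recurrent weights and a bias:
    4 * (n_hidden * (n_in + n_hidden) + n_hidden)."""
    return 4 * (n_hidden * (n_in + n_hidden) + n_hidden)

def dense_rnn_params(n_in, n_hidden):
    """A fully connected vanilla/CT-RNN cell: one weight per input and
    recurrent connection, plus a bias per unit."""
    return n_hidden * (n_in + n_hidden) + n_hidden

# Illustrative sizes: 32 input features, 64 hidden units for the dense
# baselines, versus the 253 synapses the article reports for the sparse
# 19-control-neuron NCP head.
print(lstm_params(32, 64))       # 24832
print(dense_rnn_params(32, 64))  # 6208
print(253)                       # NCP synapse count from the article
```

Dense recurrent cells pay for every possible input-hidden and hidden-hidden connection, so their parameter counts grow quadratically with width, while a sparse NCP only pays for the synapses it actually wires.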
The AI system includes compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. The researchers showed that NCPs lead to sparse networks that are more easily interpretable, as they demonstrate in the context of autonomous driving. Also, the performance achieved by such a compact neural representation is superior to that of other models in multiple aspects of an ideal autonomous mobile robot controller.
That is, the brain-inspired neural models in combination with compact convolutional neural networks (CNNs) have achieved superior performance, compared with state-of-the-art models, in learning how to steer a vehicle directly from high-dimensional inputs.
Advantages of NCPs For Autonomous Vehicles
- NCPs are highly compact task-specific neural network agents that can proficiently control a vehicle on previously unseen roads, while at the same time being robust to input artefacts, learning short-term causal representations and realising interpretable dynamics.
- NCPs can be beneficially used within full-stack autonomous vehicle frameworks.
- They are designed to improve the performance and transparency of the black-box, task-specific compartments of such complex full-stack autonomous vehicle systems.
Read the paper here.