
Why DeepMind Researchers Are Betting On Neural Algorithmic Reasoning


Trying to predict algorithmic inputs from data leads to a bottleneck, thanks to the uncertainties of the real world. Real-time data can be noisy, and this noisy raw data has to be compressed into the scalar values the algorithm consumes (for instance, edge weights). The algorithm then uses these values to estimate an output. In low-data setups, such techniques go haywire: figuring out the right scalars is challenging in itself, and any information discarded in the compression is gone for good. So how do you escape this algorithmic bottleneck?
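To make the bottleneck concrete, consider a classical shortest-path routine such as Bellman–Ford: it consumes exactly one scalar weight per edge, so any learned front-end must collapse noisy, high-dimensional observations into single numbers before the algorithm can run. The sketch below is our own illustration (not from the paper), with a crude mean standing in for a hypothetical learned front-end:

```python
import random

def bellman_ford(n, edges, source):
    # Classical algorithm: consumes exactly one scalar weight per edge.
    dist = [float("inf")] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

random.seed(0)
# Noisy, high-dimensional raw observations (e.g. 8 sensor readings per road segment).
raw_features = [[random.gauss(1.0, 0.1) for _ in range(8)] for _ in range(3)]
# The bottleneck: each feature vector must be collapsed to a single scalar
# before the algorithm can run. Here a plain mean discards 7/8 of the signal.
scalar_weights = [sum(f) / len(f) for f in raw_features]

edges = [(0, 1, scalar_weights[0]), (1, 2, scalar_weights[1]), (0, 2, scalar_weights[2])]
dist = bellman_ford(3, edges, 0)  # shortest-path estimates from compressed scalars
```

Whatever nuance the raw features carried beyond their mean is unrecoverable once the algorithm runs, which is exactly the failure mode the DeepMind researchers want to avoid.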

According to researchers at DeepMind, neural networks should consistently produce high-dimensional representations to beat the bottleneck. This means the computations of the algorithms must be made to operate over high-dimensional spaces. “The most straightforward way to achieve this is replacing the algorithm itself with a neural network—one which mimics the algorithm’s operations in this latent space, such that the desirable outputs are decodable from those latents,” explained the researchers.

The researchers underline the recent resurgence of algorithmic reasoning, which offers ways to build algorithmically inspired neural networks, primarily by learning to execute the algorithm from abstractified inputs.

Algorithmic reasoning, the researchers added, provides methods to train useful processor networks such that, within their parameters, a combinatorial algorithm can be found that is not vulnerable to the bottleneck phenomenon. Such an algorithm would feature the following:

(a) aligns with the computations of the target algorithm;

(b) operates by matrix multiplications, hence natively admits useful gradients;

(c) operates over high-dimensional latent spaces.
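Property (b) is what makes such a processor trainable: a discrete algorithmic step like argmin has zero gradient almost everywhere, whereas a matrix-multiply relaxation of the same selection is smooth end to end. A minimal sketch of this contrast (our illustration, not from the paper):

```python
import numpy as np

scores = np.array([2.0, 0.5, 1.0])

# Discrete algorithmic step: hard argmin picks index 1, but its gradient
# with respect to the scores is zero almost everywhere.
hard_choice = int(np.argmin(scores))

# Matrix-multiply relaxation: a softmax over negated scores yields smooth
# selection weights, so gradients flow through the whole operation.
weights = np.exp(-scores) / np.exp(-scores).sum()
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, 0.5]])
soft_choice = weights @ values  # differentiable weighted combination
```

The soft version concentrates most of its weight on the true minimum while remaining differentiable, which is what "natively admits useful gradients" buys.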


Research

In a paper titled “Neural Algorithmic Reasoning”, the researchers at DeepMind laid out a blueprint of neural algorithmic reasoning (NAR). To demonstrate NAR, they assumed that the real-world problem requires learning a mapping from natural inputs, x, to natural outputs, y. For example, in the case of a real-time traffic scenario, the natural inputs are often likely high-dimensional, noisy, and prone to changing rapidly. “Further, we assume that solving our problem would benefit from applying an algorithm, A—however, A only operates over abstract inputs, x̄,” they added.

In this work, the researchers demonstrated an elegant neural end-to-end pipeline that goes straight from raw inputs to general outputs while emulating an algorithm internally. The blueprint gives the general procedure for applying an algorithm A (which admits abstract inputs x̄) to raw inputs x.

According to the researchers at DeepMind, the blueprint for NAR looks like this:

  • First, learn an algorithmic reasoner for A by learning to execute it on synthetically generated abstract inputs, x̄. This yields functions f, P, g such that g(P(f(x̄))) ≈ A(x̄), where f and g are encoder/decoder functions designed to carry data to and from the latent space of P (the processor network).
  • Set up appropriate encoder and decoder neural networks, f̃ and g̃, to process raw data and produce desirable outputs. The encoder should produce embeddings that correspond to the input dimension of P, while the decoder should operate over input embeddings that correspond to the output dimension of P.
  • Swap out f and g for f̃ and g̃, and learn their parameters by gradient descent on any differentiable loss function that compares g̃(P(f̃(x))) to ground-truth outputs, y. The parameters of P should be kept frozen.

Through this pipeline, neural algorithmic reasoning offers a strong approach to applying algorithms on natural inputs. The raw encoder function, f̃, has the potential to replace the human feature engineer, as it learns to map raw inputs onto the algorithmic input space of P purely by backpropagation.
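Structurally, the blueprint amounts to composing three networks and freezing the middle one. The following sketch is our own, with arbitrary toy dimensions and plain linear maps standing in for the real encoder, processor, and decoder architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

class Linear:
    """Toy stand-in for a neural network layer: a single linear map."""
    def __init__(self, d_in, d_out):
        self.W = rng.standard_normal((d_in, d_out)) * 0.1
    def __call__(self, x):
        return x @ self.W

# Stage 1: learn f, P, g on synthetic abstract inputs so that
# g(P(f(x_bar))) approximates the algorithm A(x_bar).
f = Linear(4, 16)    # abstract encoder: abstract inputs -> latent space
P = Linear(16, 16)   # processor network: mimics A in the latent space
g = Linear(16, 2)    # abstract decoder: latents -> algorithm outputs

x_bar = rng.standard_normal(4)
y_abstract = g(P(f(x_bar)))  # after training, this would approximate A(x_bar)

# Stage 2: swap in raw-data encoder/decoder; P's parameters stay frozen.
f_tilde = Linear(32, 16)  # raw encoder: output must match P's input dim (16)
g_tilde = Linear(16, 2)   # raw decoder: operates over P's output dim (16)

x_raw = rng.standard_normal(32)      # noisy, high-dimensional natural input
y_hat = g_tilde(P(f_tilde(x_raw)))   # end-to-end; only f_tilde, g_tilde train
```

In a real implementation, each Linear would be a full neural network and stage 2 would run gradient descent on a loss comparing y_hat to ground-truth outputs, with P's parameters excluded from the update.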

The researchers believe the algorithmic reasoner described in their work can be bootstrapped from existing algorithms by supervising its internal workings, and then embedded into real-world input/output tasks via separately trained encoding/decoding networks. The technique has already borne fruit in domains such as reinforcement learning and genome assembly. Neural algorithmic reasoning also fits well with other machine learning challenges, as it allows classical algorithms to be applied to natural inputs, uniting their theoretical guarantees with the designer’s intent.


Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.