DeepMind Introduces Two New Neural Network Verification Algorithms & A Library

Machine learning is mostly about building good function approximations from data. And, when it comes to good approximations, deep learning algorithms have a great following as they are founded on principles of universal approximation. Adoption of deep neural networks has been high in the past couple of years. But, as these systems scale, new challenges surface, from misclassification to unexamined vulnerabilities to adversarial attacks. So far, there have been few robust verification techniques.

Neural network verification is an ongoing research challenge. To address it, researchers at Alphabet's DeepMind have introduced two new verification algorithms, along with a quick-to-launch verification library written in JAX for developers. We discuss these algorithms in the next section.

DeepMind’s Push For Better Verification

Source: DeepMind

“There is often a trade-off between the computational efficiency of the verification algorithm and its ‘tightness’.”

DeepMind considers neural network verification to be a powerful technology, as it offers the promise of provable guarantees that networks satisfy desirable properties or specifications. Existing verification algorithms guarantee that a property holds when they return successfully, but they may fail to verify properties that are in fact true.

According to the researchers, there is often a trade-off between the computational efficiency of a verification algorithm and its "tightness", i.e. the gap between the properties that are actually true and the properties that the verification method is able to verify as true.

The following algorithms attempt to address these trade-offs and enable computationally efficient verification algorithms:

Efficient non-convex reformulations

In this work, the researchers developed a novel non-convex reformulation of convex relaxations for neural network verification. Neural network verification algorithms are usually derived from convex relaxations, but convex optimisation leads to incomplete verification. The authors also stated that convex optimisation does not scale efficiently to modern neural networks.

The authors accepted the presence of the non-convexity and yet derived algorithms that are guaranteed to converge quickly to the global optimum. The non-convex reformulation, the authors explained, results in an optimisation problem with only bound constraints, which are simple to project onto. This, in turn, enables solving the relaxation with simple projected-gradient-style methods. The result is several orders of magnitude of speedup relative to alternative solvers, while maintaining the tightness of the relaxation.
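To illustrate why bound constraints matter, here is a minimal NumPy sketch of projected gradient descent on a box-constrained problem, the kind of structure the reformulation produces. The objective, bounds, and step size are all made up for demonstration; this is not DeepMind's algorithm, only the generic projected-gradient pattern it builds on.

```python
import numpy as np

def project(x, lower, upper):
    # Projection onto a box [lower, upper] is a simple elementwise clip --
    # this cheapness is what makes bound-constrained problems attractive.
    return np.clip(x, lower, upper)

def projected_gradient_descent(grad_fn, x0, lower, upper, step=0.1, iters=500):
    x = x0
    for _ in range(iters):
        # Gradient step followed by projection back into the feasible box.
        x = project(x - step * grad_fn(x), lower, upper)
    return x

# Toy objective f(x) = ||x - c||^2 with the unconstrained minimiser c
# outside the box, so the constrained solution lies on the boundary.
c = np.array([2.0, -3.0])
grad = lambda x: 2.0 * (x - c)
lower, upper = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

x_star = projected_gradient_descent(grad, np.zeros(2), lower, upper)
# For this objective, the constrained minimiser is the projection of c
# onto the box, i.e. [1.0, -1.0].
```

Each iteration needs only a gradient evaluation and a clip, which is why such methods scale to large networks where general-purpose convex solvers do not.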

Memory-Efficient first-order semidefinite programming 

The capability of semidefinite programming (SDP) to capture the relationships between neurons makes it a natural choice for building tighter verification algorithms. However, the memory and computational overhead of SDP solvers grows rapidly with network size, making them impractical beyond small, fully connected neural networks.

With memory-efficient first-order SDP, the authors exploit well-known reformulations of SDPs as eigenvalue optimisation problems, and couple these with iterative methods for eigenvector computation. This leads, the authors explained, to an algorithm with low per-iteration cost that preserves the tightness of the semidefinite relaxation. Based on their experimental results, the authors claim that this approach can yield scalable and tight verification of networks trained without special regularisers to promote verifiability.
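The memory-saving primitive behind such eigenvalue reformulations is estimating extreme eigenvalues using only matrix-vector products, so the SDP's large matrix never has to be stored or factorised densely. The sketch below uses plain power iteration on a tiny stand-in matrix; the specific iterative method, matrix, and tolerances are illustrative assumptions, not DeepMind's implementation.

```python
import numpy as np

def power_iteration(matvec, dim, iters=200, seed=0):
    # Estimate the dominant eigenvalue of a symmetric operator given only
    # a matvec oracle -- the memory footprint is a handful of vectors.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    # The Rayleigh quotient approximates the dominant eigenvalue.
    return v @ matvec(v)

# Toy symmetric matrix standing in for the (much larger) SDP matrix.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam_max = power_iteration(lambda v: A @ v, dim=2)
# Exact largest eigenvalue of A is (7 + sqrt(5)) / 2.
```

In a real verifier the matvec would be implemented via forward/backward passes through the network rather than an explicit matrix, which is what keeps the memory cost low.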

Along with these two algorithms, the DeepMind team also released a JAX library for verification. Jax_verify is a library containing JAX implementations of many widely used neural network verification techniques. JAX itself is widely used for automatic differentiation in high-performance machine learning research.

Jax_verify, said DeepMind, is built to be easy to use and general, thanks to JAX's powerful program transformation system, which allows one to analyse general network structures and define corresponding functions for computing verified bounds for those networks. These verification techniques are all exposed through a simple, unified interface.
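One of the standard techniques a library like jax_verify exposes is interval bound propagation (IBP): elementwise input bounds are pushed through each layer to obtain sound output bounds. Below is a hedged NumPy sketch of the idea; the network weights, input point, and epsilon are invented for illustration, and this is the generic IBP recipe rather than jax_verify's actual API.

```python
import numpy as np

def affine_bounds(lower, upper, W, b):
    # Split W into positive and negative parts so each output bound uses
    # the input bound that makes it soundly loose.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def relu_bounds(lower, upper):
    # ReLU is monotone, so bounds pass through elementwise.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Tiny two-layer network and an epsilon-ball around an input point.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
x, eps = np.array([1.0, 0.0]), 0.1

l, u = affine_bounds(x - eps, x + eps, W1, b1)
l, u = relu_bounds(l, u)
l, u = affine_bounds(l, u, W2, b2)
# l and u now bracket every output the network can produce on the ball.
```

IBP is fast but loose; the two algorithms above can be seen as ways of tightening such bounds without giving up scalability.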

DeepMind researchers believe that their work on verification algorithms has the potential to lead to new scalable algorithms for verifying properties of neural networks and solving certain kinds of structured regression problems. “They can have an impact in terms of better methods to evaluate the reliability and trustworthiness of state of the art deep learning systems, thereby catching any unseen failure modes and preventing undesirable consequences of deep learning models.” 

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.