DeepMind Introduces Two New Neural Network Verification Algorithms & A Library

Ram Sagar

Machine learning is largely about building good function approximations from data. When it comes to good approximations, deep learning algorithms have a strong following because they rest on principles of universal approximation, and adoption of deep neural networks has surged in the past couple of years. But as these systems scale, new challenges surface, ranging from misclassification to unexamined vulnerabilities to adversarial attack. So far, there have been few robust verification techniques.

Neural network verification is an ongoing research challenge. To address it, researchers at Alphabet's DeepMind have introduced two new verification algorithms and an easy-to-adopt verification library, written in JAX, for developers. We discuss these algorithms in the next section.

DeepMind’s Push For Better Verification

Source: DeepMind

“There is often a trade-off between the computational efficiency of the verification algorithm and its ‘tightness’.”

DeepMind considers neural network verification a powerful technology because it promises provable guarantees that networks satisfy desirable properties or specifications. Existing verification algorithms guarantee that a property is true if they return successfully, but they may fail to verify properties that are in fact true.

According to the researchers, these verification algorithms often come with a trade-off between computational efficiency and "tightness", i.e. the gap between the properties that are actually true and the properties the verification method is able to prove true.

The following algorithms attempt to address these trade-offs and enable computationally efficient verification algorithms:

Efficient non-convex reformulations

In this work, the researchers developed a novel non-convex reformulation of convex relaxations of neural network verification. Verification algorithms are usually derived from convex relaxations, but convex optimisation leads to incomplete verification and, as the authors note, does not scale efficiently to modern neural networks.

The authors embraced the non-convexity and still derived algorithms that are guaranteed to converge quickly to the global optimum. The non-convex reformulation, the authors explained, results in an optimisation problem with only bound constraints, which are simple to project onto. This makes it possible to solve the relaxations with simple projected-gradient-style methods, yielding several orders of magnitude speedups relative to alternative solvers while maintaining the tightness of the relaxations.
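To see why bound constraints make projection cheap, here is a minimal sketch (not DeepMind's implementation; the objective, bounds, and step size are toy values) of projected gradient descent on a box-constrained problem, where the projection step is just an elementwise clip:

```python
# Projected gradient descent with box constraints l <= x <= u.
# The projection onto a box is a simple per-coordinate clip, which is
# what makes gradient-style methods attractive for such problems.

def project(x, lower, upper):
    """Clip each coordinate of x into its [lower, upper] interval."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

def projected_gradient_descent(grad, x0, lower, upper, lr=0.1, steps=200):
    x = project(x0, lower, upper)
    for _ in range(steps):
        g = grad(x)
        x = project([xi - lr * gi for xi, gi in zip(x, g)], lower, upper)
    return x

# Toy objective f(x) = (x0 - 3)^2 + (x1 + 1)^2 over the box [0, 2]^2;
# the constrained optimum sits on the boundary at (2, 0).
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x_star = projected_gradient_descent(grad, [1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
```

Because the projection is a clip, each iteration costs no more than the gradient evaluation itself, which is what allows these methods to scale where generic constrained solvers do not.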

Memory-Efficient first-order semidefinite programming 

The ability of semidefinite programming (SDP) to capture relationships between neurons makes it a natural choice for building tighter verification algorithms. However, the memory and computational overhead of standard SDP solvers grows quickly with network size, making them impractical beyond small, fully connected neural networks.


With memory-efficient first-order SDP, the authors exploit well-known reformulations of SDPs as eigenvalue optimisation problems and couple these with iterative methods for eigenvector computation. This leads, the authors explained, to an algorithm with low per-iteration complexity while preserving the tightness of the semidefinite relaxations. Based on their experimental results, the authors claim this approach can deliver scalable and tight verification of networks trained without special regularisers to promote verifiability.
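The iterative eigenvector methods referred to above can be as simple as power iteration, which needs only matrix-vector products rather than a full eigendecomposition. A minimal sketch (illustrative only, with a toy matrix; not the paper's algorithm):

```python
# Power iteration: estimate the dominant eigenvalue/eigenvector of a
# symmetric matrix using only matrix-vector products. This is the kind
# of cheap iterative subroutine that keeps per-iteration memory low in
# eigenvalue-based reformulations of SDPs.
import math

def power_iteration(matrix, steps=100):
    n = len(matrix)
    v = [1.0] * n  # arbitrary nonzero starting vector
    for _ in range(steps):
        # One matrix-vector product, then renormalise.
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / norm for wi in w]
    # Rayleigh quotient v^T M v approximates the dominant eigenvalue.
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    eigenvalue = sum(vi * mvi for vi, mvi in zip(v, mv))
    return eigenvalue, v

# Symmetric 2x2 example with eigenvalues 3 and 1.
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

Only one vector (plus a matrix-vector product) needs to be stored at a time, which is the source of the memory savings compared with storing and factorising the full semidefinite matrix variable.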

Along with these two algorithms, the DeepMind team also released jax_verify, a library containing JAX implementations of many widely used neural network verification techniques. JAX is a popular framework for automatic differentiation in high-performance machine learning research.

jax_verify, said DeepMind, is built to be easy to use and general, thanks to JAX's powerful program-transformation system, which allows it to analyse general network structures and define corresponding functions for computing verified bounds for those networks. These verification techniques are all exposed through a simple, unified interface.
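To make "verified bounds" concrete, here is a sketch of interval bound propagation, one of the simplest techniques a verification library of this kind implements. This is plain Python for illustration, not the jax_verify API; the network weights are toy values:

```python
# Interval bound propagation through one linear + ReLU layer: given
# elementwise bounds on the input, compute guaranteed elementwise
# bounds on the output. Every actual output is provably inside them.

def interval_linear(lower, upper, weights, bias):
    """Propagate input bounds through y = W x + b.

    For each output, a positive weight pulls in the input's lower bound
    for the output's lower bound (and vice versa for negative weights).
    """
    out_l, out_u = [], []
    for row, b in zip(weights, bias):
        lo = b + sum(w * (l if w >= 0 else u) for w, l, u in zip(row, lower, upper))
        hi = b + sum(w * (u if w >= 0 else l) for w, l, u in zip(row, lower, upper))
        out_l.append(lo)
        out_u.append(hi)
    return out_l, out_u

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps bounds through directly."""
    return [max(l, 0.0) for l in lower], [max(u, 0.0) for u in upper]

# Input x in [-1, 1]^2, one layer with toy weights W and bias b.
W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, -1.0]
l, u = interval_linear([-1.0, -1.0], [1.0, 1.0], W, b)
l, u = interval_relu(l, u)
```

If a property (say, "output 0 never exceeds 4") holds for the computed bounds, it is verified for every input in the box; if not, the bounds may simply be too loose, which is exactly the tightness gap discussed earlier.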

DeepMind researchers believe that their work on verification algorithms has the potential to lead to new scalable algorithms for verifying properties of neural networks and solving certain kinds of structured regression problems. “They can have an impact in terms of better methods to evaluate the reliability and trustworthiness of state of the art deep learning systems, thereby catching any unseen failure modes and preventing undesirable consequences of deep learning models.” 

Know more here.


Copyright Analytics India Magazine Pvt Ltd
