

Google Open Sources TensorNetwork, A Library For Faster ML And Physics Tasks


“Every evolving intelligence will eventually encounter certain very special ideas – e.g., about arithmetic, causal reasoning and economics – because these particular ideas are very much simpler than other ideas with similar uses,” said the AI maverick Marvin Minsky four decades ago.



Mathematics has so far been successful as a tool for interpreting nature’s most confounding problems, from molecular biology to quantum mechanics. Though there are no complete answers to these problems yet, the techniques of the field help shed some light on the obscure corners of reality.

These complex problems of time and space require long hours of simulation, superfast computers, or sometimes both. The modern-day algorithmic approach has taken over a multitude of domains and continues to do so. Machine learning algorithms have been deployed for tracking exoplanets and, at CERN, for tasks such as track pattern recognition, particle identification, online real-time processing (triggers) and searches for very rare decays.

But understanding the complexity of quantum systems is difficult. The number of quantum states in these systems is exponentially large, making brute-force computation infeasible. To deal with this, data structures called tensor networks are used.
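To see how quickly brute force becomes hopeless, here is a back-of-the-envelope Python calculation (our illustration, not from the announcement) of the memory needed just to store the full state vector of an n-qubit system:

```python
# Memory cost of storing a full quantum state vector:
# an n-qubit system has 2**n complex amplitudes.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9  # complex128 = 16 bytes per amplitude
    print(f"{n} qubits: {amplitudes:,} amplitudes, {gigabytes:,.1f} GB")

# 10 qubits: 1,024 amplitudes, 0.0 GB
# 30 qubits: 1,073,741,824 amplitudes, 17.2 GB
# 50 qubits: 1,125,899,906,842,624 amplitudes, 18,014,398.5 GB
```

Tensor networks sidestep this blow-up by representing only the physically relevant corner of that enormous state space.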

Tensors are multidimensional arrays, categorized in a hierarchy according to their order: e.g., an ordinary number is a tensor of order zero (also known as a scalar), a vector is an order-one tensor, a matrix is an order-two tensor, and so on.

For example, a vector representing an object’s velocity through space would be a three-dimensional, order-one tensor.
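In NumPy terms (a quick illustration of our own), a tensor’s order is just its number of indices, reported by `ndim`:

```python
import numpy as np

scalar = np.array(3.14)               # order 0: a single number
velocity = np.array([1.0, 2.0, 3.0])  # order 1: a 3-component vector
matrix = np.eye(3)                    # order 2: rows and columns
cube = np.zeros((3, 3, 3))            # order 3: three indices

for t in (scalar, velocity, matrix, cube):
    print(t.ndim, t.shape)
# 0 ()
# 1 (3,)
# 2 (3, 3)
# 3 (3, 3, 3)
```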

Tensor networks have yet to see widespread use in the machine learning community, for the following reasons:

  1. a production-level tensor network library for accelerated hardware has not been available to run tensor network algorithms at scale, and
  2. most of the tensor network literature is geared toward physics applications and creates the false impression that expertise in quantum mechanics is required to understand the algorithms.

To address these challenges, Google has introduced TensorNetwork, an open source library that eases computation in advanced domains like particle physics. The library was developed in collaboration with the Perimeter Institute for Theoretical Physics and X, the Alphabet moonshot company that tackles the world’s hardest problems.

Significance Of TensorNetwork Representation

Diagrammatic notation for tensors (via Google AI)

Diagrammatic notation offers a graphical way of encoding how several constituent tensors are contracted to form a new one. Each constituent tensor has an order determined by its own number of legs. Legs that are connected, forming an edge in the diagram, represent contraction, while the number of remaining dangling legs determines the order of the resultant tensor.
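A minimal sketch of these ideas in code, assuming the `tn.Node`/`tn.contract` interface of the open source library (check the repository for the current API):

```python
import numpy as np
import tensornetwork as tn

# Two order-2 tensors (matrices); each node has two "legs" (indices).
a = tn.Node(np.random.rand(2, 3))
b = tn.Node(np.random.rand(3, 4))

# Connecting leg 1 of `a` to leg 0 of `b` draws an edge in the diagram;
# contracting that edge sums over the shared index.
edge = a[1] ^ b[0]
result = tn.contract(edge)

# Two dangling legs remain, so the result is an order-2 tensor (a matrix).
print(result.tensor.shape)  # (2, 4)
```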

TensorNetwork uses TensorFlow as a backend and is optimized for GPU processing, which can enable speedups of up to 100x compared with running on a CPU. TensorFlow’s machine learning platform offers a comprehensive, flexible ecosystem of tools, libraries and community resources, letting researchers push the state of the art in ML and developers easily build and deploy ML-powered applications.
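For instance, routing contractions through TensorFlow (and hence onto a GPU when one is available) is a one-line change, assuming the library’s `set_default_backend` helper:

```python
import numpy as np
import tensornetwork as tn

# Route all contractions through TensorFlow instead of NumPy;
# on a machine with a GPU, TensorFlow places the work there.
tn.set_default_backend("tensorflow")

a = tn.Node(np.random.rand(64, 64))
b = tn.Node(np.random.rand(64, 64))
result = tn.contract(a[1] ^ b[0])
print(result.tensor.shape)  # (64, 64)
```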

In the example above, a closed network (one with no dangling legs) represents a scalar, while the network on the right, with three dangling legs, represents an order-three tensor.


The benefit of representing tensors this way is that it succinctly encodes mathematical operations, e.g., multiplying a matrix by a vector to produce another vector, or multiplying two vectors to produce a scalar.
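In index notation, both of these operations are single contractions; NumPy’s einsum (our illustration, independent of the library) makes the bookkeeping explicit:

```python
import numpy as np

M = np.random.rand(3, 3)
v = np.random.rand(3)
w = np.random.rand(3)

# Matrix x vector: contract one edge; one dangling leg remains (a vector).
u = np.einsum("ij,j->i", M, v)

# Vector x vector: contract the only edge; no dangling legs remain,
# so the closed network is a scalar (the inner product).
s = np.einsum("i,i->", v, w)

print(u.shape, s.shape)  # (3,) ()
```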

Where Does TensorNetwork Come Into Play?

This open source library was developed both to speed up deep neural networks on tasks like image processing and to tackle complex physics problems involving quantum mechanics.

For example, in an image classification task, each pixel of an image can be one-hot-encoded into a two-dimensional vector, and by combining these pixel encodings, a 2^N-dimensional one-hot encoding of an entire N-pixel image can be made. This high-dimensional vector can be reshaped into an order-N tensor, and then all of the tensors in a collection of images can be added up to get a total tensor encapsulating the collection.

Naively encoding images of just 50 pixels in this way would already take petabytes of memory. That’s where tensor networks come in.
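A toy version of the construction for a four-pixel binary image (our sketch, with hypothetical variable names), ending with the memory estimate that rules out the naive approach:

```python
import numpy as np
from functools import reduce

pixels = np.array([0, 1, 1, 0])  # a tiny "image" of N = 4 binary pixels

# One-hot encode each pixel into a two-dimensional vector.
onehots = [np.eye(2)[p] for p in pixels]

# Combine the pixel encodings with Kronecker (outer) products into a
# 2**N-dimensional one-hot encoding of the whole image...
image_vec = reduce(np.kron, onehots)
print(image_vec.shape)  # (16,)

# ...which reshapes into an order-N tensor, one leg per pixel.
image_tensor = image_vec.reshape((2,) * len(pixels))
print(image_tensor.shape)  # (2, 2, 2, 2)

# At 50 pixels the same construction needs 2**50 entries per image:
print(2 ** 50 * 8 / 1e15, "PB per image (float64)")  # ~9 PB
```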

The team behind TensorNetwork plans to release a series of papers giving users an overview of the library and its API and illustrating real-world use cases. The papers are aimed at assisting both machine learning engineers and physicists, with the help of the open source community.

Know more about TensorNetwork here.


