
Now You Can Write Code Once That Works On Both PyTorch And TensorFlow


“Library developers no longer need to choose between frameworks.”

Researchers from the Tübingen AI Center, Germany, have introduced a new Python framework, EagerPy, that lets developers write code that works natively with popular frameworks such as PyTorch and TensorFlow, without being tied to either.

In their recently published work on EagerPy, the researchers wrote that library developers no longer need to worry about framework dependencies; EagerPy takes care of the re-implementation and code-duplication hurdles for them.

For instance, Foolbox, a Python library for running adversarial attacks against machine learning models, is built on top of EagerPy. It was rewritten in EagerPy instead of NumPy to achieve native performance on models developed in PyTorch and TensorFlow with a single code base and without code duplication.

Importance Of Framework Agnostic Practices

Addressing the differences between frameworks, the authors explored both semantic and syntactic deviations. In PyTorch, gradients are requested with an in-place requires_grad_() call, and backpropagation is triggered with backward(). TensorFlow, by contrast, offers a high-level context manager (tf.GradientTape) and functions such as tape.gradient to query gradients. Even at the syntactic level, the two frameworks differ considerably: parameters are named dim versus axis, and functions sum versus reduce_sum.

This is where EagerPy comes into the picture. It resolves the differences between PyTorch and TensorFlow by providing a unified API that transparently maps to various underlying frameworks without the computational overhead. 
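To make the idea of a unified API concrete, here is a minimal, hypothetical sketch (not EagerPy's actual implementation) of how one wrapper class can expose a single method vocabulary over a raw backend tensor; only a NumPy backend is shown, and the class and function names are invented for illustration:

```python
import numpy as np

class UnifiedTensor:
    """Wraps a raw array and exposes a single, chainable API."""
    def __init__(self, raw):
        self.raw = raw

    def square(self):
        return UnifiedTensor(self.raw ** 2)

    def sum(self):
        # NumPy names this np.sum; TensorFlow names it tf.reduce_sum.
        # The wrapper hides that naming difference behind one method.
        return UnifiedTensor(np.sum(self.raw))

    def sqrt(self):
        return UnifiedTensor(np.sqrt(self.raw))

def astensor(x):
    # Dispatch on the raw type; a real multi-backend version would also
    # check for torch.Tensor, tf.Tensor, jax arrays, etc.
    if isinstance(x, np.ndarray):
        return UnifiedTensor(x)
    raise TypeError(f"unsupported tensor type: {type(x)}")

x = astensor(np.array([1.0, 2.0, 3.0]))
print(x.square().sum().sqrt().raw)  # 3.7416... (the Euclidean norm)
```

Because every method returns another wrapped tensor, calls chain naturally, and the original native object is always recoverable through the raw attribute.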

“EagerPy lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.”

EagerPy focuses on eager execution and in addition, wrote the researchers, its approach is transparent, and users can combine framework-agnostic EagerPy code with framework-specific code. 

The introduction of eager execution in TensorFlow 2 and similar features in PyTorch made eager execution mainstream and the frameworks more similar. However, despite these similarities between PyTorch and TensorFlow 2, writing framework-agnostic code is not straightforward: at the semantic level, their APIs for automatic differentiation differ.

Automatic differentiation refers to the algorithmic computation of exact derivatives of a function. It works on the principle of the chain rule: the derivative of a composite function can be broken down into derivatives of fundamental mathematical operations (addition, subtraction, multiplication and division). These operations can be represented as a computation graph. EagerPy notably uses a functional approach to automatic differentiation.
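The chain-rule principle can be demonstrated in a few lines with forward-mode automatic differentiation using dual numbers (an illustrative toy, not how EagerPy works internally; EagerPy delegates differentiation to the underlying frameworks):

```python
class Dual:
    """A number carrying its value and its derivative together."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # derivative of a sum is the sum of derivatives
        return Dual(self.val + other.val, self.grad + other.grad)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)

def f(x):
    return x * x + x  # f(x) = x^2 + x, so f'(x) = 2x + 1

y = f(Dual(3.0, 1.0))  # seed: dx/dx = 1
print(y.val, y.grad)   # 12.0 7.0
```

Each arithmetic operation propagates derivatives alongside values, which is exactly the chain-rule decomposition described above.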

Here’s a code snippet from the EagerPy documentation:

import torch
import eagerpy as ep

# convert a native (here, PyTorch) tensor to an EagerPy tensor
x = ep.astensor(torch.tensor([1., 2., 3.]))

def loss_fn(x):
    # this function takes and returns an EagerPy tensor
    return x.square().sum()

print(loss_fn(x))
# PyTorchTensor(tensor(14.))

print(ep.value_and_grad(loss_fn, x))

First, the function is defined; it is then passed to ep.value_and_grad, which evaluates it and differentiates it with respect to its input, returning both the value and the gradient.
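Conceptually, value-and-grad returns the pair (f(x), ∇f(x)). For the quadratic loss above, that pair can be sanity-checked in plain NumPy with a central-difference approximation (a numerical illustration only, not EagerPy code; the helper name is invented):

```python
import numpy as np

def loss_fn(x):
    return np.sum(np.square(x))

def value_and_grad_fd(f, x, eps=1e-6):
    """Approximate (f(x), grad f(x)) via central differences."""
    val = f(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return val, grad

val, grad = value_and_grad_fd(loss_fn, np.array([1., 2., 3.]))
print(val)   # 14.0
print(grad)  # approximately [2. 4. 6.]
```

The gradient of sum(x²) is 2x, so the numerical estimate [2, 4, 6] matches what a framework's automatic differentiation would return exactly.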

Likewise, the norm function (defined in the Get Started section below) can be used with native tensors and arrays from PyTorch, TensorFlow, JAX and NumPy with virtually no overhead compared to native code. It also works with GPU tensors.

import torch
norm(torch.tensor([1., 2., 3.]))

import tensorflow as tf
norm(tf.constant([1., 2., 3.]))

In summary, EagerPy is designed to offer the following features:

  • A unified API for eager execution
  • Native performance of the underlying frameworks
  • A fully chainable API
  • Comprehensive type-checking support

These attributes, claim the researchers, make EagerPy easier and safer to work with than the underlying framework-specific APIs. Despite these changes and improvements, the team behind EagerPy made sure that the EagerPy API follows the standards set by NumPy, PyTorch, and JAX.

Get Started With EagerPy:

Install the latest release from PyPI using pip:

python3 -m pip install eagerpy

import eagerpy as ep

def norm(x):
    # convert the native tensor (PyTorch, TensorFlow, JAX or NumPy) to an EagerPy tensor
    x = ep.astensor(x)
    result = x.square().sum().sqrt()
    # return a native tensor of the original framework
    return result.raw
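Assuming NumPy is available, the value this framework-agnostic norm computes can be verified directly without EagerPy:

```python
import numpy as np

# Plain-NumPy equivalent of the norm above:
# sqrt(1^2 + 2^2 + 3^2) = sqrt(14)
x = np.array([1., 2., 3.])
print(np.sqrt(np.sum(np.square(x))))  # 3.7416...
```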

Know more about EagerPy in the official documentation.


Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.