“Library developers no longer need to choose between frameworks.”
Researchers from the Tübingen AI Center in Germany have introduced a new Python framework, EagerPy, that lets developers write code that works independently of popular frameworks such as PyTorch and TensorFlow.
In a recently published work on EagerPy, the researchers wrote that library developers no longer need to watch out for framework dependencies: their new framework takes care of the re-implementation and code-duplication hurdles.
For instance, Foolbox, a Python library for running adversarial attacks against machine learning models, is built on top of EagerPy. It was rewritten in EagerPy instead of NumPy to achieve native performance on models developed in PyTorch and TensorFlow with a single code base and without code duplication.
Importance Of Framework Agnostic Practices
Addressing the differences between frameworks, the authors explored syntactic and semantic deviations. In PyTorch, gradients are requested with an in-place requires_grad_() call, and backpropagation is triggered with backward(). TensorFlow, by contrast, offers a high-level gradient tape context manager and functions such as tape.gradient to query gradients. Even at the syntactic level, the two frameworks differ considerably: for example, dim vs axis for parameters and sum vs reduce_sum for functions.
This is where EagerPy comes into the picture. It resolves the differences between PyTorch and TensorFlow by providing a unified API that transparently maps to the underlying frameworks without computational overhead.
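The idea behind such a unified API can be sketched in plain Python, with NumPy standing in for a backend. The class and method names below are illustrative only, not EagerPy's actual internals:

```python
import numpy as np

class UnifiedTensor:
    """Toy wrapper that hides backend-specific names (illustrative only)."""
    def __init__(self, raw):
        self.raw = raw  # the native tensor/array underneath

    def sum(self, axis=None):
        # PyTorch calls this argument `dim`, TensorFlow and NumPy call it
        # `axis`; a unified wrapper exposes one name and translates internally.
        return UnifiedTensor(np.sum(self.raw, axis=axis))

    def square(self):
        return UnifiedTensor(np.square(self.raw))

    def sqrt(self):
        return UnifiedTensor(np.sqrt(self.raw))

x = UnifiedTensor(np.array([1.0, 2.0, 3.0]))
result = x.square().sum().sqrt()  # chainable, backend-agnostic calls
```

Because every method returns a wrapper again, calls chain naturally, and the caller never touches framework-specific argument names.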
“EagerPy lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.”
EagerPy focuses on eager execution and, the researchers wrote, its approach is transparent: users can combine framework-agnostic EagerPy code with framework-specific code.
The introduction of an eager execution mode by TensorFlow, similar to what PyTorch offers, made eager execution mainstream and the frameworks more alike. However, despite these similarities between PyTorch and TensorFlow 2, writing framework-agnostic code is not straightforward: at the semantic level, the APIs for automatic differentiation in these frameworks differ.
Automatic differentiation refers to the algorithmic computation of derivatives. It works on the principle of the chain rule: the derivative of a composite function can be broken down into derivatives of fundamental operations (addition, subtraction, multiplication and division), which can be represented in a graph format. EagerPy notably uses a functional approach to automatic differentiation.
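As a concrete illustration of the chain rule at work (a hand-derived example, not EagerPy code): for f(x) = sqrt(x1² + x2² + x3²), the chain rule gives the partial derivative ∂f/∂xᵢ = xᵢ / f(x). A quick numerical check with finite differences confirms it:

```python
import math

def f(x):
    return math.sqrt(sum(v * v for v in x))

def analytic_grad(x):
    # chain rule: d/dx_i sqrt(s) = (1 / (2 * sqrt(s))) * 2 * x_i = x_i / f(x)
    fx = f(x)
    return [v / fx for v in x]

def numeric_grad(x, eps=1e-6):
    # central finite differences as an independent check
    grads = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grads.append((f(xp) - f(xm)) / (2 * eps))
    return grads

x = [1.0, 2.0, 3.0]
print(analytic_grad(x))
print(numeric_grad(x))
```

Automatic differentiation systems such as those in PyTorch and TensorFlow apply this decomposition mechanically to every operation in the computation graph, rather than requiring the derivative to be worked out by hand.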
Here’s a code snippet from the EagerPy documentation:
import eagerpy as ep

def loss_fn(x):
    # this function takes and returns an EagerPy tensor
    return x.square().sum()

x = ep.astensor(x)  # x is a native PyTorch, TensorFlow, JAX or NumPy tensor
loss, grad = ep.value_and_grad(loss_fn, x)
First, a function is defined and then differentiated with respect to its inputs by passing it to ep.value_and_grad, which evaluates both the function and its gradient.
Similarly, the norm function from the EagerPy documentation can be used with native tensors and arrays from PyTorch, TensorFlow, JAX and NumPy, with virtually no overhead compared to native code. It also works with GPU tensors.

import eagerpy as ep

def norm(x):
    x = ep.astensor(x)
    return x.square().sum().sqrt().raw

import torch
norm(torch.tensor([1., 2., 3.]))

import tensorflow as tf
norm(tf.constant([1., 2., 3.]))
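Under the hood this is simply the Euclidean norm. A self-contained NumPy stand-in (a sketch for illustration, not EagerPy itself) computes the same thing:

```python
import numpy as np

def norm(x):
    # NumPy stand-in for the framework-agnostic EagerPy norm above
    x = np.asarray(x)
    return float(np.sqrt(np.square(x).sum()))

print(norm([1.0, 2.0, 3.0]))  # Euclidean norm: sqrt(1 + 4 + 9) = sqrt(14)
```

The EagerPy version does no more work than this; it only dispatches each chained call to whichever framework the input tensor came from.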
In summary, EagerPy is designed to offer the following features:
- A unified API for eager execution
- The native performance of the underlying frameworks
- A fully chainable API
- Comprehensive type-checking support
These attributes, claim the researchers, make EagerPy easier and safer to work with than the underlying framework-specific APIs. At the same time, the team made sure that the EagerPy API follows the standards set by NumPy, PyTorch, and JAX.
Get Started With EagerPy:
Install the latest release from PyPI using pip:
python3 -m pip install eagerpy
import numpy as np
import eagerpy as ep

x = np.array([1., 2., 3.])  # any native tensor: PyTorch, TensorFlow, JAX or NumPy
x = ep.astensor(x)
result = x.square().sum().sqrt()
Know more about EagerPy here