The TensorFlow team at Google AI has been steadily working on enhancements and updates to its popular machine learning platform, TensorFlow. The developers have now released the latest version of the platform, TensorFlow 2.2.0.
TensorFlow 2.2.0 includes a number of changes and bug fixes intended to make the library more productive. This release now requires gast version 0.3.3. In an earlier post, the TensorFlow team announced that it would stop supporting Python 2 when upstream support ended in 2020, allowing it to take advantage of new features in the current version of the Python language and standard library.
TensorFlow 2.2.0 has been released!
Check out the release notes for major features and improvements such as a new Profiler for TF 2 for CPU/GPU/TPU, updates on tf.distribute, tf.keras, and more.
Learn more here ↓ https://t.co/gVmsqgZZxV
— TensorFlow (@TensorFlow) May 8, 2020
They stated, “After January 1, 2020, we will not distribute binaries for Python 2, and we will not require Python 2 compatibility for changes to the codebase. It is likely that TensorFlow will not work with Python 2 in 2020 and beyond.”
With this new update, the developers also released TensorFlow Docker images that provide Python 3 exclusively. They further mentioned that since all Docker images now use Python 3, the Docker tags containing -py3 will no longer work. Existing -py3 tags such as latest-py3 will also not be updated further.
Here are some of the major features and improvements introduced in TensorFlow 2.2.0:
TensorFlow Docker Images
The TensorFlow Docker images are based on TensorFlow's official Python binaries, which require a CPU with AVX support. As of April 2, 2020, the developers stopped publishing the duplicate -py3 images.
New Profiler for TF 2
TensorFlow 2.2 includes a new Profiler for TF 2 that works across CPU, GPU, and TPU and offers both device and host performance analysis, including the input pipeline and TF ops. Using the TensorFlow Profiler to profile the execution of your TensorFlow code helps quantify the performance of a machine learning application.
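As a minimal sketch, the profiler can be driven programmatically from Python in TF 2.2; the log directory and the workload below are purely illustrative:

```python
import tensorflow as tf

# Illustrative log directory; adjust to your environment.
logdir = "/tmp/tf_profile"

# Start the profiler, run the workload to be measured, then stop it.
tf.profiler.experimental.start(logdir)

x = tf.random.normal([512, 512])
for _ in range(10):
    x = tf.matmul(x, x)  # work captured in the trace

tf.profiler.experimental.stop()
# The collected trace can then be inspected in TensorBoard's Profile tab.
```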
Use pybind11 to Export C++ Functions
In this version, C++ functions are exported to Python using pybind11 instead of SWIG. TensorFlow 2.2 adopts pybind11 as part of the effort to deprecate SWIG, an interface compiler that connects code written in C++ with a Python API.
Scalar Type Replaced
For string tensors, the scalar type has been changed from std::string to tensorflow::tstring, which is now ABI stable.
Tf.distribute
There have been performance improvements for GPU multi-worker distributed training using tf.distribute.experimental.MultiWorkerMirroredStrategy. Support for the tf.keras.layers.experimental.SyncBatchNormalization layer has also been added for global sync BatchNormalization; a sketch of both appears below.
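A minimal sketch of how these two pieces might be combined, assuming a toy Sequential model (the layer sizes and optimizer are illustrative only; a real multi-worker job would also set up TF_CONFIG for the cluster):

```python
import tensorflow as tf

# In a real multi-worker job, TF_CONFIG would describe the cluster
# before the strategy is created.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        # Synchronises batch-norm statistics globally across replicas.
        tf.keras.layers.experimental.SyncBatchNormalization(),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```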
Tf.keras
There have been major improvements in Model.fit: you can now use custom training logic with Model.fit by overriding Model.train_step, making it easy to write state-of-the-art training loops (see the default Model.train_step for reference). In TensorFlow 2.2, all Keras built-in layers are now supported by the SavedModel format, including metrics, preprocessing layers, and stateful RNN layers.
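A rough sketch of the train_step override pattern described above, assuming a toy functional model and made-up data shapes:

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    # Overriding train_step customises what Model.fit does per batch while
    # keeping fit's callback, distribution, and metric handling.
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

# Toy functional model and random data, purely for illustration.
inputs = tf.keras.Input(shape=(32,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(tf.random.normal((64, 32)), tf.random.normal((64, 1)), epochs=1)
```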
Keras compile and fit behaviour for functional and subclassed models has been unified, and for functional models, properties such as metrics and metrics_names will now be available only after training or evaluating the model on actual data. According to the developers, metrics will now include model loss and output losses, while the loss_functions property has been removed from the model.
Tf.lite
The TFLite experimental new converter is now enabled by default.
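For illustration, a minimal conversion sketch using a toy Keras model (the model and output filename are placeholders); with the new converter enabled by default, the standard converter API below is all that is needed:

```python
import tensorflow as tf

# Toy Keras model, purely for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert the model to a TFLite FlatBuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```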
XLA
XLA now builds and works on Windows, and all prebuilt packages come with XLA available. XLA can also be enabled for a tf.function with “compile or throw exception” semantics on CPUs and GPUs. The XLA_CPU and XLA_GPU devices have been deprecated with this release.
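A small illustrative sketch of opting a tf.function into XLA compilation via the experimental_compile flag (the function body and shapes are made up):

```python
import tensorflow as tf

# experimental_compile=True requests XLA compilation with
# "compile or throw exception" semantics: the function is either
# compiled with XLA or an error is raised, with no silent fallback.
@tf.function(experimental_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

# Made-up shapes, purely for illustration.
x = tf.random.normal((8, 16))
w = tf.random.normal((16, 4))
b = tf.zeros((4,))
print(dense_relu(x, w, b).shape)
```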
Some of the bug fixes are mentioned below:-
- Tf.data: autotune_algorithm has been removed from experimental optimisation options
- TF Core: Eager TensorHandles maintain a list of mirrors for any copies to local or remote devices, which avoids redundant copies due to op execution. For tf.Tensor and tf.Variable, .experimental_ref() is no longer experimental and is available as .ref().
- Tf.keras: The experimental_aggregate_gradients argument has been added to tf.keras.optimizers.Optimizer.apply_gradients, which allows custom gradient aggregation and processing of aggregated gradients in a custom training loop (see the sketch after this list).
- TPU Enhancements: TensorFlow 2.2 now supports configuring TPU software version from cloud TPU client.
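As referenced above, here is a minimal sketch of the experimental_aggregate_gradients argument inside a hand-written training step; the variable, data shapes, and the assumption that gradients are already aggregated elsewhere are illustrative only:

```python
import tensorflow as tf

# Illustrative variable and optimizer for a hand-written training step.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
w = tf.Variable(tf.ones((4, 1)))

def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
    grads = tape.gradient(loss, [w])
    # experimental_aggregate_gradients=False tells the optimizer that the
    # gradients are already aggregated (e.g. by a custom all-reduce), so it
    # skips its own cross-replica aggregation.
    optimizer.apply_gradients(zip(grads, [w]),
                              experimental_aggregate_gradients=False)
    return loss

loss = train_step(tf.random.normal((8, 4)), tf.random.normal((8, 1)))
```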