Google has announced the final release of TensorFlow 2.0, promising more power and productivity than earlier versions. The TensorFlow 2.0 Alpha was released in March 2019 at the TensorFlow Dev Summit. Developed internally by Google Brain and released in 2015, the platform quickly gained popularity and became a favourite among machine learning researchers.
TensorFlow is one of the most flexible platforms for working on machine learning problems. It gives ML researchers and developers the essential tools to build state-of-the-art machine learning applications.
In one of our earlier articles, we discussed the key features around which the TensorFlow 2.0 Alpha version was redesigned, including:
- The replacement of APIs
- Python-like execution
- Control over variables
- Graph mode functions and autograph (illustrated briefly after this list)
- Managing variables with Keras
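As a quick refresher on that programming model, here is a minimal sketch of our own (not taken from the release notes): eager execution runs operations immediately, while tf.function and AutoGraph trace ordinary Python control flow into a graph.

```python
import tensorflow as tf

# Eager execution is on by default in TensorFlow 2.0:
# operations run immediately and return concrete values.
x = tf.constant([1.0, 2.0, 3.0])
print(x * 2)  # tf.Tensor([2. 4. 6.], shape=(3,), dtype=float32)

# tf.function traces the Python function into a graph;
# AutoGraph converts the Python `if` into graph control flow.
@tf.function
def scale(v):
    if tf.reduce_sum(v) > 5:
        return v * 0.5
    return v * 2.0

print(scale(x))
```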
The TensorFlow 2.0 release also includes an automatic conversion script to assist in migrating code from TensorFlow 1.x to TensorFlow 2.0. In this article, we discuss the further enhancements the platform brings with the final release for machine learning enthusiasts.
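The conversion utility ships as the tf_upgrade_v2 command. The file names below are placeholders, and the snippet is a hedged sketch of the kind of rewrite it performs: 1.x symbols without a direct 2.0 equivalent are routed through the compatibility module so the upgraded script still runs.

```python
# Invoked from the shell (file names are placeholders):
#   tf_upgrade_v2 --infile model_v1.py --outfile model_v2.py
#
# Typical result: 1.x symbols removed from the top-level namespace
# are rewritten to their tf.compat.v1 equivalents.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()      # 1.x-style graph mode
with tf.compat.v1.Session() as sess:        # tf.Session() -> tf.compat.v1.Session()
    total = tf.constant(2.0) + tf.constant(3.0)
    print(sess.run(total))                  # 5.0
```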

SavedModel Format
TensorFlow 2.0 standardises on the SavedModel file format so that machine learning models can run across a number of runtimes, including the browser, Node.js and the cloud. A SavedModel contains a complete TensorFlow program, including weights and computation, and does not require the original model-building code to run, which makes it easy to deploy or share via TensorFlow Hub, TensorFlow Serving, TFLite or TensorFlow.js.
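As a minimal sketch of the workflow (the model and export path here are our own placeholders), a Keras model can be exported to and restored from the SavedModel format without its original building code:

```python
import tensorflow as tf

# Build and train a small Keras model on toy data, for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(tf.random.normal((32, 8)), tf.random.normal((32, 1)), epochs=1, verbose=0)

# Export weights and computation to the SavedModel format.
tf.saved_model.save(model, "/tmp/my_saved_model")

# Reload later (or in another runtime) without the original code.
restored = tf.saved_model.load("/tmp/my_saved_model")
print(restored.signatures)  # serving signatures available for deployment
```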
Distribution Strategy API
The Distribution Strategy API (tf.distribute) is used to distribute training across multiple GPUs, multiple machines or TPUs. It lets existing machine learning models and training code be distributed with minimal code changes, making high-performance training easier to attain. The API is designed to be easy to use for multiple user segments, including researchers and ML engineers, to deliver good performance out of the box, and to allow easy switching between strategies.
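A minimal sketch of the pattern (the model and data are placeholders): creating the model and optimizer inside a strategy scope is usually the only change required to mirror training across all visible GPUs.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and
# keeps the variables in sync after each step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model and optimizer are created inside the scope so their
    # variables are mirrored across devices.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# The training call itself is unchanged.
model.fit(tf.random.normal((256, 10)), tf.random.normal((256, 1)), epochs=2, verbose=0)
```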
Multi-GPU Support
The new version brings several performance improvements on GPUs as well as multi-GPU support. According to the TensorFlow team, the simplest way to run on multiple GPUs in TensorFlow, on one or many machines, is to use Distribution Strategies. The support is intended to let a machine learning model scale as more resources are added.
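For instance, assuming a machine with at least two GPUs (the device names below are illustrative), the visible accelerators can be enumerated and handed to a MirroredStrategy explicitly; with no GPUs present, the strategy simply falls back to the devices that are available.

```python
import tensorflow as tf

# Enumerate the accelerators TensorFlow can see on this machine.
gpus = tf.config.experimental.list_physical_devices("GPU")
print("GPUs available:", [gpu.name for gpu in gpus])

# MirroredStrategy uses all visible GPUs by default; a subset can be
# selected explicitly (device names here are just examples).
if len(gpus) >= 2:
    strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
else:
    strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)
```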
Performance Improvements on GPUs
The new release also claims better performance from GPU acceleration. The platform uses an improved API to maintain high performance during inference on NVIDIA T4 Cloud GPUs on Google Cloud. According to a blog post by the TensorFlow team, TensorFlow 2.0 delivers up to three times faster training performance using mixed precision on Volta and Turing GPUs with just a few lines of code, for example on ResNet-50 and BERT.
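Those "few lines" refer to automatic mixed precision. One way to enable it in this generation of TensorFlow is the experimental graph-rewrite API, which wraps an existing optimizer; the sketch below uses a toy model of our own, and the actual speed-up requires a Volta or Turing GPU.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1),
])

# Wrap the optimizer so eligible ops run in float16 with automatic
# loss scaling; the rest of the training code stays the same.
opt = tf.keras.optimizers.Adam()
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

model.compile(optimizer=opt, loss="mse")
model.fit(tf.random.normal((128, 32)), tf.random.normal((128, 1)), epochs=1, verbose=0)
```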
Outlook
TensorFlow is all about supporting the machine learning developer community with a flexible, powerful and easy-to-use platform that supports deployment anywhere. Not only Python developers but also JavaScript developers can use the platform, running machine learning directly in the browser or in Node.js through TensorFlow.js. Furthermore, Google is pushing the integration of the Swift language with TensorFlow to create a platform for deep learning and differentiable programming. A date for the next release has not been announced, but the researchers at Google say that Cloud TPU support will be coming in a future release.