Following version 2.4, the Google Brain team has released the upgraded version of TensorFlow, version 2.5.0. The latest version comes with several new and improved features: TensorFlow 2.5 now supports Python 3.9, and the TensorFlow pip packages are now built with CUDA 11.2 and cuDNN 8.1.0.
In this article, we discuss the major updates and features of TensorFlow 2.5.0.
oneAPI Deep Neural Network Library (oneDNN)
The oneAPI Deep Neural Network Library (oneDNN) CPU performance optimizations from Intel-optimized TensorFlow are now available in the official x86-64 Linux and Windows builds. oneDNN is an open-source, cross-platform performance library offering basic building blocks for deep learning applications.
It is intended for developers interested in improving application performance on Intel CPUs and GPUs. The library is optimized mainly for Intel products such as Intel Architecture processors, Intel Processor Graphics and Xe-architecture-based graphics. However, the developers do not yet recommend these optimizations on GPU systems, as they have not been sufficiently tested with GPUs.
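According to the release notes, these oneDNN optimizations are off by default in the 2.5 builds and are opted into through an environment variable, which must be set before TensorFlow is imported. A minimal sketch:

```python
import os

# The oneDNN CPU optimizations are opt-in in the TensorFlow 2.5 builds.
# The flag must be set before `import tensorflow` for it to take effect.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf  # imported afterwards, so it picks up the flag
```

Equivalently, the variable can be exported in the shell before launching the training script.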
Third-party devices can now connect to TensorFlow modularly through the StreamExecutor C API and the PluggableDevice interface. Developers can add custom ops and kernels through the kernel and op registration C API, and register custom graph optimization passes with the graph optimization C API.
TPU embedding support
For TPU embedding support, this new version adds profile_data_directory to EmbeddingConfigSpec in _tpu_estimator_embedding.py. This allows embedding lookup statistics gathered at runtime to be used in embedding layer partitioning decisions.
Updates to the tf.data service
The tf.data service now supports strict round-robin reads, useful for synchronous training workloads where example sizes vary. With strict round-robin reads, users can guarantee that consumers get similar-sized examples in the same step.
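The idea can be pictured with a toy sketch (an illustration only, not the tf.data service implementation): under strict round-robin, element k goes to consumer k mod n, so each of the n consumers reads exactly one element per round:

```python
def strict_round_robin(elements, num_consumers):
    """Toy model of strict round-robin reads: element k is delivered to
    consumer k % num_consumers, one element per consumer per round."""
    shards = [[] for _ in range(num_consumers)]
    for k, element in enumerate(elements):
        shards[k % num_consumers].append(element)
    return shards

# Two consumers reading six batches: each gets one batch per step.
strict_round_robin([10, 11, 12, 13, 14, 15], 2)
# -> [[10, 12, 14], [11, 13, 15]]
```

In the real API, the per-consumer assignment is configured through the num_consumers and consumer_index arguments of tf.data.experimental.service.distribute.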
The tf.data service also supports optional compression. Previously, data was always compressed when using the service; compression can now be disabled by passing compression=None to tf.data.experimental.service.distribute(...).
The tf.data service also supports arguments indicating that multiple input batches should be computed in parallel. Additionally, tf.data input pipelines can now be executed in debug mode, which disables any asynchrony, parallelism or non-determinism and forces Python execution (as opposed to trace-compiled graph execution) of user-defined functions passed into transformations such as map.
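The effect of debug mode can be pictured with a plain-Python analogue (a conceptual sketch, not the tf.data implementation): the same user function run through a parallel executor in normal mode, versus a plain sequential loop in debug mode, where breakpoints and print statements behave as expected:

```python
from concurrent.futures import ThreadPoolExecutor

def mapped_parallel(fn, items, num_workers=4):
    # Normal-mode analogue: the user function runs across several
    # workers, so the order of *execution* is not fixed (results
    # are still returned in input order).
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(fn, items))

def mapped_debug(fn, items):
    # Debug-mode analogue: no asynchrony or parallelism; fn runs one
    # element at a time in ordinary Python, easy to step through.
    return [fn(x) for x in items]

square = lambda x: x * x
assert mapped_parallel(square, range(5)) == mapped_debug(square, range(5))
```

Both produce the same results here; the difference is purely in how the user function is executed, which is what makes debug mode useful for troubleshooting.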
TensorFlow 2.5.0 also comes with many bug fixes and other changes, including:
- Keras inputs can now be created directly from arbitrary tf.TypeSpecs.
- Added two new learning rate schedules: tf.keras.optimizers.schedules.CosineDecay and tf.keras.optimizers.schedules.CosineDecayRestarts.
- Exposed tf.data.experimental.ExternalStatePolicy, which can be used to control how external state should be handled during dataset serialization or iterator checkpointing.
- The Python TF Lite Interpreter bindings now have an option experimental_preserve_all_tensors to aid in debugging conversion.
- Enabled post-training quantization with calibration for models that require user-provided TensorFlow Lite custom op libraries, via converter.target_spec._experimental_custom_op_registerers as used in the Python Interpreter API.
- Added a new enum value, MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED, to tf.config.experimental.mlir_bridge_rollout to enable a “safe” mode, which runs the MLIR bridge only when an analysis of the graph determines that it is safe to run.
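The cosine decay schedule added above is simple enough to sketch directly. The stand-alone function below mirrors the formula documented for tf.keras.optimizers.schedules.CosineDecay (a re-implementation for illustration, not the Keras code itself): the learning rate follows half a cosine wave from the initial rate down towards alpha times the initial rate.

```python
import math

def cosine_decay(initial_lr, step, decay_steps, alpha=0.0):
    """Cosine decay, following the documented formula for
    tf.keras.optimizers.schedules.CosineDecay."""
    step = min(step, decay_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / decay_steps))
    decayed = (1.0 - alpha) * cosine + alpha
    return initial_lr * decayed

cosine_decay(0.1, 0, 100)    # start of training -> 0.1
cosine_decay(0.1, 50, 100)   # halfway -> 0.05
cosine_decay(0.1, 100, 100)  # fully decayed -> ~0.0
```

CosineDecayRestarts applies the same decay repeatedly over restart cycles instead of once over the whole run.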
The TensorFlow team at Google Brain has been working steadily to enhance and update its popular machine learning platform. The team also recently launched MoveNet, a pose detection model with an API available in TensorFlow.js.
The full release notes for TensorFlow 2.5.0 are available on GitHub.