The latest update, TensorFlow 2.15, introduces several significant improvements. One notable enhancement is the oneDNN performance optimisation for CPUs on Windows x64 and x86 platforms. This optimisation is enabled automatically on x86 CPUs and can be toggled with an environment variable. It may lead to slightly different numerical results but aims to boost performance.
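As a minimal sketch of how that toggle works: the environment variable is `TF_ENABLE_ONEDNN_OPTS`, and it must be set before TensorFlow is imported to take effect.

```python
import os

# TF_ENABLE_ONEDNN_OPTS toggles TensorFlow's oneDNN CPU optimisations.
# It is read at import time, so set it before importing TensorFlow.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"  # "0" disables, "1" forces on

# import tensorflow as tf  # import only after the variable is set
```

Disabling the optimisations is the usual way to check whether the small numerical differences mentioned above matter for a given workload.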
Another key improvement is the expansion of the `tf.function` type system. This update allows for more control and flexibility when working with TensorFlow functions. It introduces `tf.types.experimental.TraceType` to handle custom TensorFlow function inputs, `tf.types.experimental.FunctionType` to comprehensively represent function signatures, and `tf.types.experimental.AtomicFunction` for fast TensorFlow computations in Python.
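To give a feel for what the `TraceType` protocol expresses, here is an illustrative pure-Python sketch (no TensorFlow import; the class and its dimension-matching rule are invented for this example). A trace type tells `tf.function` when a cached trace can be reused for a new input: one type is a subtype of another if every input matching the first also matches the second.

```python
# Hypothetical trace type for tensor shapes, where None acts as a
# wildcard dimension. Loosely mirrors the is_subtype_of / __eq__ /
# __hash__ contract of tf.types.experimental.TraceType.
class ShapeTraceType:
    def __init__(self, dims):
        self.dims = tuple(dims)  # e.g. (None, 3) matches (2, 3), (5, 3), ...

    def is_subtype_of(self, other):
        # Subtype if every dimension matches exactly, or the other side
        # accepts any value (None) in that position.
        if len(self.dims) != len(other.dims):
            return False
        return all(o is None or s == o for s, o in zip(self.dims, other.dims))

    def __eq__(self, other):
        return isinstance(other, ShapeTraceType) and self.dims == other.dims

    def __hash__(self):  # trace types are used as cache keys
        return hash(self.dims)

concrete = ShapeTraceType((2, 3))
generic = ShapeTraceType((None, 3))
print(concrete.is_subtype_of(generic))  # a (2, 3) input can reuse the (None, 3) trace
print(generic.is_subtype_of(concrete))  # but not the other way around
```

In real TensorFlow, implementing this protocol on an input type lets `tf.function` decide whether to retrace or dispatch to an existing concrete function.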
TensorFlow’s data processing capabilities have also been refined. The option `warm_start` has been moved to `tf.data.Options`, simplifying data handling and offering more control.
Moreover, TensorFlow 2.15 introduces bug fixes and additional changes. One notable addition is the TensorFlow Quantizer in the TensorFlow pip package, which aids in quantizing models. Additionally, it brings an option to make the gradient output of specific functions sparse instead of dense.
TensorFlow Lite (tf.lite) has received several updates, including support for broadcasting in certain operations and the promotion of the `tflite::SignatureRunner` class, which simplifies working with named parameters and computations within TF Lite models. This enhancement removes its experimental status.
Keras, a high-level neural networks API, has received updates as well, including bug fixes, new ops in `tf.raw_ops`, and the addition of a `tf.train.CheckpointOptions` argument for executing callbacks during checkpoint saving. There’s also an option to control the behaviour of the eager runtime when executing parallel remote function invocations.