What’s New in the Latest TensorFlow 2.10

TensorFlow’s team is now introducing Decision Forests 1.0 with the latest release

Last week, Google released the latest version of its popular open-source software library, TensorFlow 2.10. The launch comes three months after the introduction of TensorFlow 2.9. In 2022 alone, the TensorFlow team has released three versions – 2.8, 2.9, and 2.10.

The highlights of TensorFlow 2.10 include user-friendly updates to Keras that help users develop transformers, updates to the optimizer API, deterministic and stateless initializers, and a new tool for loading audio data. Furthermore, as with TensorFlow 2.9, this version brings more oneDNN enhancements. TensorFlow’s team is also introducing Decision Forests 1.0 with the latest release.

Updates to Keras

In v2.9, TensorFlow released an experimental version of the Keras Optimizer API, offering a unified and expanded catalogue of built-in and customised optimizers. The experimental optimizers will replace those in the current namespace in the next release (TensorFlow v2.11). To prepare for this formal switch of the optimizer namespace to the new API, the team has started exporting the current Keras optimizers under tf.keras.optimizers.legacy in v2.10.
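As a minimal sketch (assuming TensorFlow 2.10 is installed), the same optimizer can be reached through both namespaces during the transition:

```python
import tensorflow as tf

# The existing optimizer implementation, now also exported under the
# legacy namespace ahead of the v2.11 switch.
legacy_adam = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)

# The new experimental optimizer API that will become the default.
new_adam = tf.keras.optimizers.experimental.Adam(learning_rate=1e-3)
```

Code that depends on the old behaviour can pin itself to the legacy namespace now, before the default flips in v2.11.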


It may be noted that TensorFlow introduced an API to make ops deterministic in v2.8 and improved its performance in v2.9. Built on top of stateless TF random ops, Keras initializers are now stateless and deterministic. From this version, both seeded and unseeded initializers will generate the same values every time they are called. With stateless initializers, Keras can support new features such as multi-client model training with DTensor (a TensorFlow extension for synchronous distributed computing).
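A small sketch of the new behaviour (assuming TensorFlow 2.10): calling the same seeded initializer twice now yields identical values.

```python
import tensorflow as tf

# In TF 2.10, Keras initializers are stateless: repeated calls to the
# same (seeded) initializer produce identical values.
init = tf.keras.initializers.GlorotUniform(seed=42)
a = init(shape=(2, 3))
b = init(shape=(2, 3))
identical = bool(tf.reduce_all(a == b))
```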

The team has expanded and unified mask handling across the Keras attention layers tf.keras.layers.Attention, tf.keras.layers.AdditiveAttention, and tf.keras.layers.MultiHeadAttention. In addition, two features have been added – causal attention and implicit masking. Combined, these features simplify the implementation of a Transformer-style model, solving the tricky problem of getting the mask right.
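For instance (a minimal sketch, assuming TensorFlow 2.10), causal attention can now be requested directly at call time, without building the mask by hand:

```python
import tensorflow as tf

# use_causal_mask applies a look-ahead mask so each position attends
# only to itself and earlier positions, as in a Transformer decoder.
layer = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=4)
x = tf.random.normal((1, 5, 8))  # (batch, sequence, features)
out = layer(query=x, value=x, use_causal_mask=True)
```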


In v2.9, TensorFlow integrated the oneDNN performance library to achieve better performance on Intel CPUs. In this version, oneDNN optimisations are turned on by default in Linux x86 packages and for CPUs with neural-network-focused hardware features, such as those found on Intel Cascade Lake and newer CPUs.
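Although oneDNN is now on by default for these packages, it can still be forced on or off with an environment variable, set before TensorFlow is imported:

```shell
# Force oneDNN optimisations on:
export TF_ENABLE_ONEDNN_OPTS=1

# Or turn them off, e.g. to compare floating-point results:
export TF_ENABLE_ONEDNN_OPTS=0
```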

Moving forward, in v2.10, TensorFlow has collaborated with Arm, AWS, and Linaro to integrate the Compute Library for the Arm Architecture through oneDNN. This integration accelerates performance on aarch64 CPUs.

TensorFlow Decision Forests

TensorFlow Decision Forests is a collection of state-of-the-art algorithms for training and interpreting Decision Forest models. It is a collection of Keras models that supports classification, ranking, and regression.

With the release of TensorFlow 2.10, Decision Forests reaches v1.0, making it a more stable and mature library. With improved documentation and more comprehensive testing, the company claims that TensorFlow Decision Forests is ready for use in professional environments. The new release offers JavaScript and Go APIs for inference of Decision Forest models. These APIs are still in beta, and the team is seeking feedback on them.

TensorFlow Decision Forests 1.0 improves the performance of oblique splits, which recursively divide the feature space by splitting on linear combinations of attributes. Oblique splits allow decision trees to express complex patterns by conditioning on multiple features at the same time, and have been shown to outperform axis-aligned splits on a majority of datasets.

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
