The PyTorch Foundation announced the first experimental release of the much-anticipated PyTorch 2.0, at the recently held PyTorch Conference.
It is the first step towards the next generation 2-series release of PyTorch. While the beta version has been released for now, the first stable 2.0 release is expected in March 2023.
PyTorch 2.0 continues to offer the same user experience; however, it fundamentally changes how PyTorch operates at the compiler level under the hood, the Foundation said.
One of the most talked-about features of the new version is its 100% backward compatibility. With the new version, data scientists can continue doing the same things they did with the previous version, but much faster.
PyTorch was introduced in 2016 as a deep learning platform that focuses on usability and speed by offering an imperative and Pythonic programming style.
PyTorch treats code as a model that remains efficient, supports hardware accelerators (like GPUs) and makes debugging easy. It stood tall against TensorFlow, Google’s machine learning platform introduced just a year earlier.
torch.compile
One of the main features that PyTorch 2.0 brings is torch.compile(). The feature changes compilation behaviour in favour of increased speed, and its components are written in Python.
The new feature pushes PyTorch’s performance to new heights and moves some of the components of PyTorch from C++ back into Python.
Soumith Chintala, lead maintainer of PyTorch, believes that the new version will significantly change the way people use PyTorch in day-to-day tasks.
“torch.compile() makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator.”
“It works either directly over an nn.Module as a drop-in replacement for torch.jit.script() but without requiring you to make any source code changes,” Mark Saroufim, AI Engineer at Meta, said in a blogpost.
“PyTorch 2.0 is announced! Main new feature: cmodel = torch.compile(model). Faster training with no code modification. Available in nightly build. Stable release scheduled for early March,” Yann LeCun (@ylecun) tweeted on December 2, 2022. https://t.co/WSUrdEF6Hv
The Foundation further revealed that to validate the technology, they used a diverse set of 163 different open-source models—46 models from HuggingFace Transformers, 61 models from TIMM and 56 models from TorchBench.
“torch.compile works around 93% of the time, and the model runs 43% faster in training on an NVIDIA A100 GPU,” the Foundation added.
Reactions
“We tried it out in the past few weeks and here are the speedups we observed in our canonical training examples,” Hugging Face said.

However, not everyone is in agreement. “I’ve benchmarked the new `torch.compile` on CLIP Image Encoder and I’ve seen ZERO improvements (on my 3090), not sure if I did something wrong,” Francesco Saverio Zuppichini, Computer Vision Engineer at Roboflow, said, in a LinkedIn post.
In this regard, Sylvain Gugger, engineer at Hugging Face, said that one must use an Ampere GPU. “I did all my benchmarks on a cloud A100. An RTX3090 should work as well, but for older GPUs you won’t see a real improvement.”
Similarly, Diego Fiori, co-founder and CTO at Nebuly, is of the opinion that PyTorch 2.0 becomes more and more effective compared to previous versions as batch sizes grow.
“ONNX Runtime performs much better than PyTorch 2.0 at smaller batch sizes, while the result is the opposite at larger batch sizes. Again, this is because ONNX Runtime was designed mainly for inference (where usually smaller batch sizes are used), while, as stated before, PyTorch 2.0’s main goal is training,” Fiori added.
However, we must also keep in mind that only the beta version is available for now, and the stable release is scheduled for March 2023. Hence, while there might be issues with the beta version, they are likely to be resolved by the time the final version goes live.