PyTorch 2.0 Promises 100% Backward Compatibility

The new feature pushes PyTorch’s performance to new heights and moves some of the components of PyTorch from C++ back into Python.

The PyTorch Foundation announced the first experimental release of the much-anticipated PyTorch 2.0 at the recently held PyTorch Conference.

It is the first step towards the next-generation 2-series release of PyTorch. While only a beta version is available for now, the first stable 2.0 release is expected in March 2023.

PyTorch 2.0 continues to offer the same user experience; however, it fundamentally changes how PyTorch operates at the compiler level under the hood, the Foundation said.


One of the most talked-about features of the new version is its 100% backward compatibility. With the new version, data scientists can continue doing the same things they did with the previous one, but in a much faster way.

PyTorch was introduced in 2016 as a deep learning platform that focuses on usability and speed by offering an imperative and Pythonic programming style. 


PyTorch supports code as a model that remains efficient, supports hardware accelerators (like GPUs), and makes debugging easy. It stood tall against Google’s machine learning platform TensorFlow, which was introduced just a year earlier.

torch.compile

One of the main features that PyTorch 2.0 brings is torch.compile(). The feature is intended to change compilation behaviour in favour of increased speed, and its components are written in Python.
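To get a sense of the API, here is a minimal sketch of the single-call usage, based on the 2.0 preview; the function, tensor shapes and tolerance below are illustrative choices, not taken from the announcement:

    import torch

    # A plain PyTorch function; torch.compile captures and optimises it
    # without any changes to the function body. (Illustrative example.)
    def scaled_tanh(x):
        return 0.5 * x * (1.0 + torch.tanh(2.0 * x))

    compiled_fn = torch.compile(scaled_tanh)  # the single extra line

    x = torch.randn(1024, 1024)
    # The compiled version should match eager mode up to float tolerance.
    print(torch.allclose(scaled_tanh(x), compiled_fn(x), atol=1e-5))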

Soumith Chintala, lead maintainer of PyTorch, believes that the new version will significantly change the way people use PyTorch in day-to-day tasks.

“torch.compile() makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator. It works directly over an nn.Module as a drop-in replacement for torch.jit.script(), but without requiring you to make any source code changes,” Mark Saroufim, AI Engineer at Meta, said in a blogpost.
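As a sketch of what that drop-in usage looks like (the toy model and shapes here are hypothetical, assuming a PyTorch 2.0 install):

    import torch
    import torch.nn as nn

    # A toy model; torch.compile wraps the nn.Module as-is, with no
    # edits to the module's own source code.
    model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
    compiled_model = torch.compile(model)  # default TorchInductor backend
    out = compiled_model(torch.randn(8, 64))

    # The same call doubles as the single-line decorator mentioned above;
    # the `backend` argument is how one experiments with other compilers.
    @torch.compile(backend="inductor")
    def mse(pred, target):
        return (pred - target).pow(2).mean()

    loss = mse(out, torch.zeros_like(out))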

The Foundation further revealed that, to validate the technology, it used a diverse set of 163 open-source models: 46 from HuggingFace Transformers, 61 from TIMM and 56 from TorchBench.

“torch.compile works around 93% of the time, and the model runs 43% faster in training on an NVIDIA A100 GPU,” the Foundation added.

Reactions

“We tried it out in the past few weeks and here are the speedups we observed in our canonical training examples,” Hugging Face said.

However, not everyone is in agreement. “I’ve benchmarked the new `torch.compile` on CLIP Image Encoder and I’ve seen ZERO improvements (on my 3090), not sure if I did something wrong,” Francesco Saverio Zuppichini, Computer Vision Engineer at Roboflow, said in a LinkedIn post.

In this regard, Sylvain Gugger, engineer at Hugging Face, said that one must use an Ampere GPU. “I did all my benchmarks on a cloud A100. An RTX3090 should work as well, but for older GPUs you won’t see a real improvement.”
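Disagreements like this one usually come down to measurement setup. Below is a rough sketch of the kind of microbenchmark behind such comparisons, assuming a CUDA machine with PyTorch 2.0; the resnet18 model, batch size and iteration counts are illustrative choices:

    import time
    import torch
    import torchvision.models as models

    model = models.resnet18().cuda().eval()
    compiled = torch.compile(model)
    x = torch.randn(16, 3, 224, 224, device="cuda")

    @torch.no_grad()
    def bench(fn, iters=50):
        # The first calls trigger compilation, so warm up before timing.
        for _ in range(5):
            fn(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            fn(x)
        torch.cuda.synchronize()
        return (time.perf_counter() - start) / iters * 1e3  # ms per iteration

    print(f"eager:    {bench(model):.2f} ms")
    print(f"compiled: {bench(compiled):.2f} ms")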

Similarly, Diego Fiori, co-founder and CTO at Nebuly, is of the opinion that PyTorch 2.0 becomes more and more effective than previous versions as the batch size grows.

“ONNX Runtime performs much better than PyTorch 2.0 at smaller batch sizes, while the result is the opposite at larger batch sizes. Again, this is because ONNX Runtime was designed mainly for inference (where usually smaller batch sizes are used), while, as stated before, PyTorch 2.0’s main goal is training,” Fiori added.

However, we must also keep in mind that only the beta version is available for now; the stable release is scheduled for March 2023. Hence, while there might be issues with the beta version, they are likely to be resolved by the time the final version goes live.


