After months of previews since it was showcased at the PyTorch Conference, PyTorch 2.0 is finally here. The new version offers the same eager-mode experience while supercharging the compiler stack, including support for Dynamic Shapes and Distributed training. The push is firmly towards making PyTorch more “Pythonic”.
The release also includes a stable version of Accelerated Transformers. torch.compile, the main API for PyTorch 2.0, ships in beta: it wraps a model and returns a compiled version of it. Because the new features are fully additive, the release is also fully backward compatible. The focus is squarely on performance.
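For a sense of what that looks like in practice, here is a minimal sketch; the toy model and tensor sizes are illustrative assumptions, but torch.compile itself is the entry point the release documents:

```python
import torch
import torch.nn as nn

# An arbitrary toy model; any nn.Module (or plain Python function) works.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# torch.compile wraps the model and returns a compiled version of it.
# The eager-mode model is left untouched, which is what makes the new
# API fully additive and backward compatible.
compiled_model = torch.compile(model)

x = torch.randn(8, 64)
out = compiled_model(x)  # first call triggers compilation; later calls reuse it
```

Code that never calls torch.compile keeps running exactly as it did before, which is what "fully additive" means here.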
We’re excited to announce the release of PyTorch 2.0!

This version includes:
⚙️ 100% backward compatible
📦 Out of the box performance
📶 Significant speed improvements

— PyTorch (@PyTorch) March 15, 2023
New Features
The new release also includes updated domain libraries such as TorchAudio, TorchVision, and TorchText, and is clearly focused on making ML model deployment easier and faster.
TorchInductor, the foundational compiler technology behind torch.compile, targets Nvidia and AMD GPUs, leveraging the OpenAI Triton deep learning compiler to generate efficient code while concealing hardware-specific intricacies.
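A short sketch of how that plays out in user code; the pointwise function is a made-up example, while "inductor" is the default backend name torch.compile already uses:

```python
import torch

def pointwise(x):
    # Simple elementwise math that TorchInductor can fuse into a single kernel
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

# "inductor" is already the default backend; naming it explicitly is only
# for illustration. On supported GPUs it emits Triton kernels under the hood.
fused = torch.compile(pointwise, backend="inductor")

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, device=device)
print(fused(x).mean())  # ~1.0, since sin^2 + cos^2 = 1
```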
With a tailored kernel architecture for scaled dot product attention (SDPA), Accelerated Transformers bring speedy training and inference capabilities to the forefront. The API is integrated into torch.compile(), and model developers can also invoke the scaled_dot_product_attention() operator directly.
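Calling the operator directly looks like the sketch below; the tensor shapes are arbitrary assumptions, but scaled_dot_product_attention() is the operator named in the release:

```python
import torch
import torch.nn.functional as F

# Query/key/value tensors shaped (batch, heads, seq_len, head_dim);
# the sizes here are arbitrary.
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# The fused operator dispatches to the fastest kernel available for the
# hardware (e.g. FlashAttention or memory-efficient attention).
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```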
The Metal Performance Shaders (MPS) backend furnishes PyTorch training on Mac platforms with GPU acceleration, and now encompasses over 300 operators, including the top 60 most commonly used ones.
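Using the backend is a one-line device change; a minimal sketch, assuming an Apple-silicon Mac (the snippet falls back to CPU elsewhere):

```python
import torch

# Fall back to CPU when the MPS backend isn't available,
# so the same script runs on non-Mac machines too.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(32, 4).to(device)
x = torch.randn(16, 32, device=device)
print(model(x).device)  # mps on Apple-silicon Macs, otherwise cpu
```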
The PyTorch Foundation, formed in September 2022, aimed to foster greater collaboration and contributions through more open governance. The move has already paid dividends: the beta of PyTorch 2.0 was previewed in December 2022, with 428 different contributors providing new code and capabilities to the open-source project.