After months of previews at the PyTorch Conference, the new release of PyTorch 2.0 is finally here. The new version offers the same eager-mode experience while supercharging the compiler stack, including support for dynamic shapes and distributed training. The push is firmly towards making PyTorch more “Pythonic”.
The release also includes a stable version of Accelerated Transformers. The headline API for PyTorch 2.0 is torch.compile (in Beta), which wraps a model and returns a compiled version of it. Being fully additive, the new version is also fully backward compatible. The focus is performance.
New Features
The release is accompanied by updates to the domain libraries TorchAudio, TorchVision, and TorchText. It is clearly focused on making ML model deployment easier and faster.
TorchInductor, the foundational compiler technology behind torch.compile, targets Nvidia and AMD GPUs and leverages the OpenAI Triton deep learning compiler to generate efficient code while concealing hardware-specific intricacies.
With a tailored kernel architecture for scaled dot product attention (SDPA), Accelerated Transformers bring speedy training and inference capabilities to the forefront. The API is integrated into torch.compile(), while the scaled_dot_product_attention() operator can also be invoked directly by model developers.
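The direct-invocation path mentioned above looks like the following sketch; the (batch, heads, seq_len, head_dim) shapes are illustrative assumptions:

```python
# Sketch: calling the fused SDPA operator directly.
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 64, 32)  # (batch, heads, seq_len, head_dim)
k = torch.randn(2, 8, 64, 32)
v = torch.randn(2, 8, 64, 32)

# Dispatches to a fused kernel (e.g. FlashAttention or the
# memory-efficient implementation) depending on hardware and dtype,
# falling back to a math implementation otherwise.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```

Model developers who already compute softmax(QKᵀ/√d)V by hand can swap that block for this single operator and pick up the kernel selection for free.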
The Metal Performance Shaders (MPS) backend furnishes PyTorch training on Mac platforms with GPU acceleration, and now encompasses over 300 operators, including the top 60 most commonly used ones.
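Using the MPS backend only changes the device string; everything else is ordinary PyTorch. This sketch adds an assumed CPU fallback so it also runs on non-Mac machines:

```python
# Sketch: GPU-accelerated training on Mac via the MPS backend,
# with a CPU fallback for other platforms (the fallback is our
# addition, not part of the release notes).
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8, device=device)
out = model(x)
```

Any of the 300+ supported operators runs on the GPU transparently once the tensors live on the "mps" device.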
The formation of the PyTorch Foundation in September 2022 aimed to foster greater collaboration and contributions, resulting in more open governance. That investment has already paid dividends: the beta of PyTorch 2.0 was previewed in December 2022, with 428 individual contributors providing new code and capabilities to the open-source project.