Recently, Facebook announced the availability of the latest version of PyTorch, PyTorch 1.6. The social media giant also made a massive announcement that Microsoft has expanded its participation in the PyTorch community and is taking ownership of the development and maintenance of the PyTorch build for Windows.
PyTorch is one of the most popular machine learning libraries in Python. The version 1.6 release includes several new APIs, tools for performance improvement and profiling, as well as significant updates to both distributed data-parallel (DDP) and remote procedure call (RPC) based distributed training.
According to the blog post, from this release onward, features will be classified as Stable, Beta and Prototype, where Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via a compiler flag.
The significant updates in this version of PyTorch are as follows:
Automatic Mixed Precision (AMP) Training
Automatic mixed precision (AMP) training is now natively supported as a stable feature. AMP lets users easily enable mixed-precision training, offering higher performance and memory savings of up to 50 per cent on Tensor Core GPUs.
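The typical AMP pattern wraps the forward pass in an autocast context and scales the loss to avoid float16 gradient underflow. The toy model and data below are purely illustrative; the sketch falls back to full precision when no GPU is present:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # AMP targets Tensor Core GPUs; disabled on CPU here

# Hypothetical toy model and data, just to show the training-loop pattern.
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

inputs = torch.randn(8, 16, device=device)
targets = torch.randn(8, 4, device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Run the forward pass in mixed precision where supported.
    with torch.cuda.amp.autocast(enabled=use_amp):
        outputs = model(inputs)
        loss = nn.functional.mse_loss(outputs, targets)
    # Scale the loss so small float16 gradients do not underflow to zero.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

With `enabled=False`, both `autocast` and `GradScaler` become no-ops, so the same loop runs unchanged on CPU-only machines.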
TensorPipe Backend for RPC
PyTorch 1.6 introduces a new backend for the RPC module which leverages the TensorPipe library, a tensor-aware point-to-point communication primitive targeted at machine learning. It is intended to complement the current primitives for distributed training in PyTorch.
Fork/Join Parallelism in TorchScript
PyTorch 1.6 adds a language-level construct, along with runtime support, for coarse-grained parallelism in TorchScript code. This feature is useful for running models in an ensemble in parallel, or running bidirectional components of recurrent nets in parallel, and unlocks the computational power of parallel architectures for task-level parallelism.
Memory Profiler
The torch.autograd.profiler API now includes a memory profiler that lets you inspect the tensor memory cost of different operators inside your CPU and GPU models.
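Memory profiling is enabled by passing `profile_memory=True` to the profiler context; the resulting table can then be sorted by per-operator memory usage. A small CPU-only sketch:

```python
import torch
from torch.autograd import profiler

x = torch.randn(100, 100)

# Record operator-level timing and tensor memory usage.
with profiler.profile(profile_memory=True) as prof:
    y = x.matmul(x)
    z = y.relu()

# Summarise per-operator stats, sorted by CPU memory consumed.
report = prof.key_averages().table(sort_by="cpu_memory_usage", row_limit=5)
print(report)
```

On a CUDA model, the same table also reports per-operator GPU memory columns.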
Combining DDP and RPC
DDP is used for synchronous data-parallel training of models, while the RPC framework allows distributed model parallelism. PyTorch 1.6 allows these two features to be combined, achieving data parallelism and model parallelism at the same time.
Torchvision 0.7
Torchvision 0.7 introduces two new pre-trained semantic segmentation models, FCN ResNet50 and DeepLabV3 ResNet50, both of which are trained on COCO and use smaller memory footprints than their ResNet101 counterparts.
Besides these newly updated features, there are also numerous improvements and new features in distributed training & RPC, domain libraries as well as frontend APIs.
PyTorch For Windows
Researchers from Microsoft have been working on adding support for PyTorch on Windows. However, due to limited resources, including a lack of test coverage, of the TorchAudio domain library and of distributed training support, among others, Windows support for PyTorch has lagged behind other platforms.
With the release of PyTorch 1.6, the tech giant improved the core quality of the Windows build by bringing test coverage on par with Linux for core PyTorch and its domain libraries, and by automating tutorial testing.
In a blog post, the developers at Microsoft said, “Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio.”
They added, “In subsequent releases of PyTorch, we will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.”
Installing On Windows
To install PyTorch using Anaconda with the latest GPU support, run the command below:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
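Once installed, a quick sanity check from Python confirms the version and whether CUDA support is active:

```python
import torch

# Report the installed PyTorch version and GPU availability.
print(torch.__version__)
print(torch.cuda.is_available())
```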
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: firstname.lastname@example.org