PyTorch 1.6 Released, Microsoft To Take Care Of The Windows Version of PyTorch

Recently, Facebook announced the availability of the latest version of PyTorch, PyTorch 1.6. The social media giant also announced that Microsoft has expanded its participation in the PyTorch community and is taking ownership of the development and maintenance of the PyTorch build for Windows.

PyTorch is one of the most popular machine learning libraries in Python. The version 1.6 release includes several new APIs, tools for performance improvement and profiling, as well as significant updates to both distributed data-parallel (DDP) and remote procedure call (RPC) based distributed training.

According to the blog post, from this release onward, features will be classified as Stable, Beta and Prototype, where Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via a compiler flag. 

New Features

The significant updates in this version of PyTorch are as follows:

Automatic Mixed Precision (AMP) Training

Automatic mixed precision (AMP) training is now natively supported and is a stable feature. AMP lets users enable mixed-precision training with minimal code changes, delivering higher performance and memory savings of up to 50 per cent on Tensor Core GPUs.
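A minimal training-step sketch using the stable `torch.cuda.amp` API introduced in this release; the toy linear model and dimensions are illustrative, and the `enabled` flag makes the sketch a no-op fall-through on machines without a CUDA GPU.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# autocast and GradScaler become no-ops when enabled=False,
# so this sketch also runs on CPU-only machines.
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

inputs = torch.randn(8, 16)
targets = torch.randn(8, 4)
if use_cuda:
    model, inputs, targets = model.cuda(), inputs.cuda(), targets.cuda()

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_cuda):
    # Ops inside this region run in float16 where safe.
    loss = nn.functional.mse_loss(model(inputs), targets)

scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)         # unscale gradients, then take the optimizer step
scaler.update()                # adjust the scale factor for the next iteration
```

The `GradScaler` handles loss scaling automatically, which is the main source of boilerplate that manual mixed-precision training previously required.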

TensorPipe Backend for RPC

PyTorch 1.6 introduces a new backend for the RPC module which leverages the TensorPipe library. TensorPipe library is a tensor-aware point-to-point communication primitive targeted at machine learning, which is intended to complement the current primitives for distributed training in PyTorch.

Fork Parallelism

PyTorch 1.6 adds support for a language-level construct, along with runtime support, for coarse-grained parallelism in TorchScript code. This feature is useful for running models in an ensemble in parallel or running the bidirectional components of recurrent nets in parallel, and it unlocks the computational power of parallel architectures for task-level parallelism.

Memory Profiler

The torch.autograd.profiler API now includes a memory profiler that lets you inspect the tensor memory cost of different operators inside your CPU and GPU models.
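A short sketch of the new option: passing `profile_memory=True` to the existing profiler context manager records per-operator allocations, which can then be summarised in a table. The matrix sizes and the `sort_by` key are illustrative.

```python
import torch

x = torch.randn(64, 64)

# profile_memory=True records tensor allocations per operator.
with torch.autograd.profiler.profile(profile_memory=True) as prof:
    y = x @ x
    z = torch.relu(y)

# Summarise per-operator stats, sorted by self CPU memory usage.
table = prof.key_averages().table(sort_by="self_cpu_memory_usage",
                                  row_limit=5)
print(table)
```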

DDP+RPC

DDP is used for full-sync data-parallel training of models, while the RPC framework allows distributed model parallelism. PyTorch 1.6 allows these two features to work together, achieving data parallelism and model parallelism at the same time.

Torchvision 0.7

Torchvision 0.7 introduces two new pre-trained semantic segmentation models, FCN ResNet50 and DeepLabV3 ResNet50, both of which are trained on COCO and have smaller memory footprints than their ResNet101 counterparts.

Besides these newly updated features, there are also numerous improvements and new features in distributed training & RPC, domain libraries as well as frontend APIs.  

PyTorch For Windows

Researchers from Microsoft have been working on adding support for PyTorch on Windows. However, owing to limited resources, including a lack of test coverage, no TorchAudio domain library and no distributed training support, Windows support for PyTorch has lagged behind other platforms.

With the release of PyTorch 1.6, the tech giant improved the core quality of the Windows build by bringing test coverage on par with Linux for core PyTorch and its domain libraries, and by automating tutorial testing.

In a blog post, the developers at Microsoft said, “Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio.”

They added, “In subsequent releases of PyTorch, we will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.” 

Installing On Windows

To install PyTorch using Anaconda with the latest GPU support, run the command below:

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.