In recent news, Facebook has announced the stable release of PyTorch 1.7.1, a maintenance update to the popular machine learning library. Version 1.7.1 includes a handful of bug fixes along with updated binaries for Python 3.9 and cuDNN 8.0.5. PyTorch is an optimised tensor library for deep learning on both CPUs and GPUs.
A GPU-accelerated tensor computation framework with a Python front end, PyTorch is familiar to almost every data scientist and AI researcher contributing to the field of machine learning, thanks to the range of features the framework offers.
As a deep learning framework, it combines a high level of flexibility with speed and provides accelerated NumPy-like functionality. It also includes standard neural network layers, deep learning optimisers, data loading utilities, and multi-GPU and multi-node support.
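As a rough sketch of those building blocks (the tensor shapes and layer sizes here are illustrative, not from the release notes), a minimal PyTorch workflow might look like this:

```python
import torch
import torch.nn as nn

# Accelerated NumPy-like tensor computation
x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
doubled_sum = (x * 2).sum()  # scalar tensor: 2 * (0+1+2+3+4+5)

# Standard building blocks: a linear layer, an optimiser, one training step
model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(x).pow(2).mean()
optimizer.zero_grad()
loss.backward()   # autograd fills in gradients for the layer's parameters
optimizer.step()  # SGD update
```

The same code runs on a GPU by moving the tensors and model with `.to("cuda")`, which is where the cuDNN-backed kernels discussed below come into play.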
In October, PyTorch released version 1.7.0, which included a number of new APIs, support for NumPy-compatible FFT operations, profiling tools, and major updates to both Distributed Data Parallel (DDP) and remote procedure call (RPC)-based distributed training.
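The NumPy-compatible FFT support mentioned above lives in the `torch.fft` module introduced in 1.7; a short sketch of how it mirrors `numpy.fft` (the input signal here is arbitrary):

```python
import numpy as np
import torch

signal = torch.arange(8, dtype=torch.float32)

# torch.fft.fft returns a complex tensor, matching numpy.fft.fft semantics
spectrum = torch.fft.fft(signal)
reference = np.fft.fft(signal.numpy())

# The two transforms agree up to floating-point precision
matches_numpy = np.allclose(spectrum.numpy(), reference, atol=1e-4)
```

Note that in earlier releases `torch.fft` was a function rather than a module, so code written against the old API needs updating.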
In November, Jeremy Howard, a research scientist at the University of San Francisco, tweeted that he hoped PyTorch 1.7.1 would be packaged with cuDNN 8.0.4. A PyTorch engineer at NVIDIA who goes by the handle ptrblck replied that the team was working on updating PyTorch 1.7.1 to cuDNN 8.0.4 or 8.0.5.
To be specific, the CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks provided by NVIDIA. It offers highly tuned implementations of routines that frequently arise in DNN applications, such as convolutions and recurrent layers.
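From Python, you can check whether an installed PyTorch build was compiled against cuDNN and, if so, which version it uses; a small sketch:

```python
import torch

# True if this PyTorch build was compiled with cuDNN support
has_cudnn = torch.backends.cudnn.is_available()

# Integer version such as 8005 for cuDNN 8.0.5 (only valid when cuDNN is present)
cudnn_version = torch.backends.cudnn.version() if has_cudnn else None

# Optionally let cuDNN benchmark candidate convolution algorithms and
# cache the fastest one for each input shape
torch.backends.cudnn.benchmark = True
```

On CPU-only builds `has_cudnn` is simply False, so the check degrades gracefully.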
Let’s take a look at the new features and bug fixes that version 1.7.1 brings for its users:
Packaged With cuDNN 8.0.5
PyTorch 1.7.1 is packaged with cuDNN 8.0.5, a version that includes fixes from the previous cuDNN 8.0.x releases along with some additional changes. For instance, the RNN API now supports zero-length sequences within a batch, and significant performance improvements were made for the RTX 3090 on many configurations.
Upgrade CUDA Binaries to Use cuDNN 8.0.5
In this version, the CUDA binaries have been upgraded to use cuDNN 8.0.5. As a result, the regressions on Ampere cards that were introduced in cuDNN 8.0.4 are fixed, and performance improves for the RTX 3090 and other RTX 30-series cards.
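To confirm which binaries an environment actually has after upgrading, a quick version check is enough (the exact strings printed depend on the installed build):

```python
import torch

# Version of the PyTorch package itself, e.g. '1.7.1' for this release
torch_version = torch.__version__

# CUDA toolkit version the binary was built with, or None for CPU-only builds
cuda_version = torch.version.cuda
```

Pairing this with the cuDNN check above tells you whether you are on the fixed 8.0.5 path or still on an older binary.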
Add Python 3.9 Binaries For Linux, Windows and macOS
Installing this version with conda under Python 3.9 requires the conda-forge channel.
For instance: conda install -y -c pytorch -c conda-forge pytorch.
Also, here are some of the highlights from the bug fixes:
- For Python 3.9, a custom version of pybind11 is now used to work around Python 3.9 issues. This fixes both JIT parsing and cpp_extension under Python 3.9.
- Fixed cpp_extension to properly handle environment variables on Windows.
- Properly package libomp.dylib for macOS binaries.
- Build with statically linked OpenBLAS on aarch64.
- Other fixes include a Tensor Expression fix for CUDA 11.0 and a user-friendly error when trying to compile from source with Python 2, among others.