What’s new in PyTorch 1.11


PyTorch 1.11 was released on 10 March 2022. The release comprises over 3,300 new commits from 434 contributors. Alongside it, PyTorch has released beta versions of two new libraries, TorchData and functorch.

In the Python API, copying a Tensor now cleanly preserves all attributes on the object, not just the plain Tensor properties. The steps argument of torch.linspace and torch.logspace is no longer optional; it defaulted to 100 in PyTorch 1.10.2. PyTorch has also removed the torch.hub.import_module function, which had been made public by mistake. Calling x.T on tensors with a number of dimensions other than 0 or 2 is deprecated; the property will eventually accept only 0- and 2-dimensional tensors.
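The linspace change is easy to trip over when upgrading. A minimal sketch of the now-required steps argument:

```python
import torch

# As of PyTorch 1.11, `steps` must be passed explicitly;
# earlier releases silently defaulted to 100 points.
t = torch.linspace(0, 1, steps=5)
print(t)  # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
```

Calling `torch.linspace(0, 1)` without steps now raises a TypeError instead of returning a 100-element tensor.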

C++ frontend headers are now reduced to include only the subset of ATen operators that are actually used. As a result, including a header from the C++ frontend may no longer transitively include every ATen operator; users can add #include <ATen/ATen.h> directly in their files to restore the old behaviour. PyTorch 1.11 has also removed the custom move-constructor implementations for c10::List and c10::Dict. Their semantics have changed from “make the moved-from List/Dict empty” to “leave the moved-from List/Dict unchanged.”



For CUDA, the THCeilDiv function and the corresponding THC/THCDeviceUtils.cuh header have been removed, along with THCudaCheck, THCudaMalloc(), THCudaFree(), and THCThrustAllocator.cuh.

PyTorch 1.11 has also stopped building the shared library for the AOT Compiler, libaot_compiler.so. The typing.Union type is no longer supported in mobile builds, owing to its limited use and the binary-size increase it caused for PyTorch Mobile. getitem, which used to be quantized in FX Graph Mode Quantization, is no longer quantized. Users should now use fuse_modules for PTQ fusion or fuse_modules_qat for QAT fusion. torch.ao.quantization.QConfigDynamic is deprecated and expected to be removed in the next release; users should switch to torch.ao.quantization.QConfig.
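As a rough sketch of the PTQ fusion entry point mentioned above (the module names "0", "1", "2" below simply come from nn.Sequential; any named submodules work the same way):

```python
import torch
import torch.nn as nn
from torch.ao.quantization import fuse_modules

# A small Conv-BN-ReLU stack; PTQ fusion expects eval mode.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.BatchNorm2d(8),
    nn.ReLU(),
).eval()

# Fuse the three modules into one; the fused op replaces the first slot
# and the remaining slots become nn.Identity placeholders.
fused = fuse_modules(model, [["0", "1", "2"]])
print(type(fused[0]).__name__)
```

For quantization-aware training, fuse_modules_qat is the analogous call on a model in train mode.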

For ONNX, PyTorch 1.11 has removed the f arg from onnx.export_to_pretty_string, and the use_external_data_format, example_outputs, enable_onnx_checker, and _retain_param_name args from onnx.export. onnx.utils.ONNXCheckerError has been moved and renamed to onnx.CheckerError.

New features

For the Python API, PyTorch 1.11 has added set_deterministic_debug_mode and get_deterministic_debug_mode, the n-dimensional Hermitian FFTs torch.fft.ihfftn and torch.fft.hfftn, and the Wishart distribution in torch.distributions. PyTorch has added preliminary support for the Python Array API standard to the torch and torch.linalg modules, implementing over 90% of the operators the standard defines, including the torch.from_dlpack operation for improved DLPack support. torch.testing has also moved from prototype to beta.
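The new deterministic debug mode controls how PyTorch reacts to nondeterministic operations; a minimal sketch of the three settings:

```python
import torch

# "default" (0) = off; "warn" (1) = warn when a nondeterministic
# op runs; "error" (2) = raise instead of warning.
torch.set_deterministic_debug_mode("warn")
print(torch.get_deterministic_debug_mode())  # 1

# Restore the default behaviour.
torch.set_deterministic_debug_mode("default")
```

This is a debugging aid for reproducibility work, complementing the existing torch.use_deterministic_algorithms switch.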

For autograd, PyTorch 1.11 has a new torch.utils.checkpoint implementation that doesn’t use reentrant autograd. Forward-mode AD now supports most ops, and autograd.Function gains a ctx.save_for_forward function. autograd.forward_ad.unpack_dual now returns a named tuple instead of a plain tuple.
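A minimal sketch of forward-mode AD computing a directional derivative (here of f(x) = x³ at x = 2):

```python
import torch
import torch.autograd.forward_ad as fwAD

x = torch.tensor(2.0)
tangent = torch.tensor(1.0)  # direction of the directional derivative

with fwAD.dual_level():
    dual_x = fwAD.make_dual(x, tangent)
    y = dual_x * dual_x * dual_x       # f(x) = x**3
    primal, jvp = fwAD.unpack_dual(y)  # named tuple: (primal, tangent)

print(primal)  # tensor(8.)  -> f(2)
print(jvp)     # tensor(12.) -> f'(2) = 3 * 2**2
```

Unlike reverse mode, the derivative is carried forward alongside the value, so no backward pass is needed.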

Linear algebra support now includes forward AD for torch.linalg.{eig, inverse, householder_product, qr} and torch.*_solve, forward and backward AD for torch.linalg.lstsq, and a wider range of inputs for linalg.pinv.
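For readers unfamiliar with torch.linalg.lstsq, a minimal sketch fitting a line y ≈ a·x + b by least squares (the data here is illustrative):

```python
import torch

x = torch.tensor([0.0, 1.0, 2.0, 3.0])
A = torch.stack([x, torch.ones_like(x)], dim=1)  # design matrix [x | 1]
y = 2.0 * x + 1.0                                # points on an exact line

# lstsq returns a named tuple; .solution holds the coefficients (a, b).
sol = torch.linalg.lstsq(A, y.unsqueeze(1)).solution
print(sol.squeeze())  # ≈ tensor([2., 1.])
```

With forward and backward AD support, such solves can now sit inside differentiable pipelines in either AD mode.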

For ONNX, PyTorch 1.11 supports opset version 15, exporting nn.Module calls as ONNX local functions, and exporting new ops such as tanhshrink, hardshrink, softshrink, __xor__, isfinite, log10, and diagonal. It also supports exporting with Apex O2.

For Infra (Releng), PyTorch 1.11 adds support for ROCm 4.3.1 and 4.5.2, CUDA 11.5, CUDA-enabled Bazel builds, and Python 3.10.

PyTorch 1.11 also introduces FlexiBLAS build support, IS_LINUX and IS_MACOS global variables for building cpp extensions, ARC for iOS CMake builds, and support for IBM z14/z15 SIMD.

The release also includes an experimental flag that lets users specify a preferred linear algebra backend. Operations such as linalg.matrix_exp, linalg.cross, and linalg.diagonal (an alias for torch.diagonal) have been added.

For CUDA, the release introduces the Jiterator, which lets users compile rarely used CUDA kernels at runtime. It also adds cuSPARSE descriptors with updated CSR addmm and addmv_out, and exposes nvidia-smi memory and utilization metrics as native Python APIs.

For Vulkan, PyTorch 1.11 adds support for several torch operators such as torch.cat, torch.nn.ConvTranspose2d, torch.permute, Tensor indexing (at::slice), and torch.clone. The release also includes a Tracing-Based Selective Build feature that reduces a mobile model’s binary size by including only the operators the model actually uses.



PyTorch has also released TorchData, a library of common modular data-loading primitives for easily constructing flexible and performant data pipelines.

The library enables composable data loading via iterable-style and map-style building blocks called “DataPipes,” which work well out of the box with PyTorch’s DataLoader.

Users can connect multiple DataPipes to form a data pipeline that performs all data transformations.

TorchData has implemented over 50 DataPipes for core functionalities such as file opening, text parsing, sample transformation, caching, shuffling, and batching. Users who want to connect to cloud providers (such as Google Drive or AWS S3) can do so using the fsspec and iopath DataPipes. Each IterDataPipe and MapDataPipe has detailed explanations and usage examples in the documentation.
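A minimal sketch of chaining DataPipes into a pipeline, using the IterableWrapper building block that ships inside PyTorch itself (TorchData adds many more):

```python
from torch.utils.data.datapipes.iter import IterableWrapper

# Wrap an in-memory sequence, then chain transformations functionally.
dp = IterableWrapper(range(10))
dp = dp.filter(lambda x: x % 2 == 0)  # keep even numbers
dp = dp.map(lambda x: x * x)          # square them
dp = dp.batch(2)                      # group into batches of 2

print(list(dp))  # [[0, 4], [16, 36], [64]]
```

The same pipeline object can be handed directly to torch.utils.data.DataLoader, which is the intended integration point.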

In this release, some of the PyTorch domain libraries have migrated their datasets to DataPipes. TorchText’s popular datasets are implemented using DataPipes, and a section of its SST-2 binary text classification tutorial shows how to use DataPipes to preprocess data for the model.


functorch, inspired by Google’s JAX, aims to provide composable vmap (vectorization) and autodiff transforms that work well with PyTorch modules and PyTorch autograd.

The library helps users compute per-sample gradients, run ensembles of models on a single machine, batch together tasks in the inner loop of MAML, and compute Jacobians and Hessians, including batched ones.
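A minimal sketch of the composability this enables, computing per-element gradients of f(x) = x³ in one vectorized call (no Python loop over samples):

```python
import torch
from functorch import grad, vmap  # functorch ships alongside PyTorch

# grad() builds the derivative of a scalar-valued function;
# vmap() maps that derivative over a batch dimension.
f = lambda x: x ** 3
per_sample_grads = vmap(grad(f))(torch.tensor([1.0, 2.0, 3.0]))
print(per_sample_grads)  # tensor([ 3., 12., 27.])  i.e. 3*x**2
```

The same grad/vmap composition generalizes to per-sample gradients of full nn.Module losses, which is the library’s headline use case.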


Sri Krishna
Sri Krishna is a technology enthusiast with a professional background in journalism. He believes in writing on subjects that evoke a thought process towards a better world. When not writing, he indulges his passion for automobiles and poetry.
