Close to four months after the release of version 1.11, PyTorch has now introduced v1.12. The new release comprises 3,124 commits from 433 contributors. Highlights of this release include: a functional API for applying module computation with a given set of parameters; TorchData's DataPipes, which are fully backwards compatible with DataLoader; functorch with improved API coverage; complex32 and complex convolutions; TorchArrow; and more.
Alongside v1.12, the team released beta versions of AWS S3 integration, Channels Last support for PyTorch vision models on CPU, Bfloat16 support for PyTorch on Intel® Xeon® Scalable processors, and the FSDP API.
TorchArrow
TorchArrow is a library for machine learning preprocessing over batch data. Introduced as a beta release, TorchArrow features a performant, Pandas-style, easy-to-use API to speed up preprocessing workflows and development. Its features include a high-performance CPU backend with vectorised and extensible UDFs via Velox; seamless handoff to PyTorch; and zero-copy interoperation with external readers through the Arrow in-memory columnar format.
Functional API for Modules
PyTorch v1.12 also adds a new beta feature for functionally applying module computation with a given set of parameters. The traditional PyTorch module usage pattern, in which a module maintains a static set of parameters internally, can be restrictive. The restriction is most apparent when implementing meta-learning algorithms, where multiple sets of parameters must be maintained across optimiser steps.
Its features include: module computation with full flexibility over the set of parameters used; no need to reimplement a module in a functional style; and the ability to swap any parameter or buffer in the module with an externally defined value for the duration of the call.
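A minimal sketch of this pattern, using the `torch.nn.utils.stateless.functional_call` API introduced in v1.12 (the module and override values here are illustrative):

```python
import torch
from torch.nn.utils.stateless import functional_call

# A regular module; its internally-registered parameters stay untouched.
module = torch.nn.Linear(3, 2)

# Externally defined parameters, swapped in for this one call only --
# no functional reimplementation of the module is needed.
overrides = {
    "weight": torch.ones(2, 3),
    "bias": torch.zeros(2),
}

x = torch.ones(1, 3)
out = functional_call(module, overrides, (x,))
# With an all-ones weight and a zero bias, every output element is 3.0,
# regardless of the module's own randomly initialised parameters.
```

Because the overriding parameters are ordinary tensors, a meta-learning loop can keep several such dictionaries alive at once and differentiate through each call.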
Complex32 and Complex Convolutions in PyTorch
PyTorch already supports complex numbers, complex autograd, complex modules, and many complex operations, and several libraries such as torchaudio and ESPNet rely on this support. The new version extends it further with complex convolutions and the experimental complex32 data type, which enables half-precision FFT operations.
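A brief sketch of the two additions (shapes and values here are illustrative; complex32 is experimental, and half-precision FFTs are primarily targeted at CUDA devices):

```python
import torch

# Complex convolution: Conv1d can now be instantiated with a complex dtype.
conv = torch.nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3,
                       dtype=torch.complex64)
signal = torch.randn(1, 1, 8, dtype=torch.complex64)
out = conv(signal)  # complex-valued output of length 8 - 3 + 1 = 6

# The experimental half-precision complex dtype ("chalf"), which backs
# half-precision torch.fft operations.
x_half = torch.randn(4, dtype=torch.complex64).to(torch.complex32)
```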