Highlights of the new PyTorch v1.12

The new release contains 3,124 commits and was developed with the help of 433 contributors.

Close to four months after the release of version 1.11, PyTorch has introduced v1.12. The new release contains 3,124 commits and was developed with the help of 433 contributors. Highlights of this release include: a functional API for applying Module computation with a given set of parameters; TorchData’s DataPipes, which are fully backwards compatible with DataLoader; functorch with improved API coverage; complex32 and complex convolutions; TorchArrow; and others.

Along with v1.12, the team released beta versions of AWS S3 integration, channels-last memory format support for PyTorch vision models on CPU, Bfloat16 support on Intel® Xeon® Scalable processors, and the FSDP API.

TorchArrow

TorchArrow is a library for machine learning preprocessing over batch data. Introduced as a beta release, TorchArrow offers a performant, Pandas-style, easy-to-use API to speed up preprocessing workflows and development. Its features include a high-performance CPU backend with vectorised and extensible UDFs powered by Velox; seamless handoff to PyTorch; and zero-copy interchange with external readers through the Arrow in-memory columnar format.

Functional API for Modules

PyTorch v1.12 also adds a beta feature for functionally applying Module computation with a given set of parameters. The traditional PyTorch Module usage pattern, in which a module internally maintains a static set of parameters, can be restrictive, particularly when implementing meta-learning algorithms, where multiple sets of parameters must be maintained across optimiser steps.

Its features include: flexibility over the parameters used in Module computation; no need to reimplement a module in a functional way; and the ability to swap any parameter or buffer in the module with an externally defined value for the call.
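A minimal sketch of the parameter-swapping described above, using `torch.nn.utils.stateless.functional_call` (the module and parameter values are illustrative):

```python
import torch
import torch.nn as nn
from torch.nn.utils.stateless import functional_call

# An ordinary module holding its own static set of parameters
model = nn.Linear(3, 2)
x = torch.ones(1, 3)

# Externally defined parameters, e.g. from a meta-learning inner loop
params = {"weight": torch.zeros(2, 3), "bias": torch.ones(2)}

# Apply the module's computation with the swapped-in parameters;
# model's own weight and bias are left untouched.
out = functional_call(model, params, (x,))
print(out)  # tensor([[1., 1.]]) — zero weight plus unit bias
```

Because the substitution lasts only for the call, the same module can be evaluated against many candidate parameter sets without cloning or reimplementing it.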

Complex32 and Complex Convolutions in PyTorch

PyTorch already supports complex numbers, complex autograd, complex modules, and numerous complex operations, and libraries such as torchaudio and ESPNet make use of them. The new version extends this functionality with complex convolutions and the experimental complex32 data type, which enables half-precision FFT operations.
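A short illustration of the new complex support (shapes and values are arbitrary; complex32 is experimental, so `chalf` behaviour may vary by backend):

```python
import torch
import torch.nn.functional as F

# Complex convolution: conv1d over complex-valued input and weight
x = torch.randn(1, 1, 16, dtype=torch.cfloat)
w = torch.randn(1, 1, 3, dtype=torch.cfloat)
y = F.conv1d(x, w)
print(y.dtype, y.shape)  # torch.complex64 torch.Size([1, 1, 14])

# The experimental half-precision complex dtype (complex32 / chalf),
# which the release notes tie to half-precision FFT operations
z = x.to(torch.chalf)
print(z.dtype)  # torch.complex32
```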

Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.