
Highlights of the new PyTorch v1.12

The new release contains 3124 commits and was developed with the help of 433 contributors.

Close to four months after the release of version 1.11, PyTorch has now introduced v1.12. Highlights of this release include a functional API for applying module computation with a given set of parameters; TorchData DataPipes that are fully backwards compatible with DataLoader; functorch with improved API coverage; complex32 and complex convolutions; TorchArrow; and more.

Along with v1.12, the team released beta versions of AWS S3 integration, channels-last support for PyTorch vision models on CPU, Bfloat16 support for PyTorch on Intel® Xeon® Scalable processors, and the FSDP API.
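The two CPU-side features can be combined in a few lines. A minimal sketch (not from the release notes; the toy convolution stands in for a vision model): convert the model and input to the channels-last memory format, then run inference under bfloat16 autocast on CPU.

```python
import torch
import torch.nn as nn

# Toy stand-in for a vision model; converting a module to channels_last
# restrides its weights so CPU kernels can use the NHWC-friendly layout.
model = nn.Conv2d(3, 8, kernel_size=3).to(memory_format=torch.channels_last)
x = torch.randn(1, 3, 32, 32).to(memory_format=torch.channels_last)

# Bfloat16 autocast on CPU, paired with the channels-last input layout.
with torch.autocast("cpu", dtype=torch.bfloat16):
    y = model(x)
```

In practice the same two calls (`.to(memory_format=torch.channels_last)` and `torch.autocast("cpu", dtype=torch.bfloat16)`) wrap an existing inference loop without other code changes.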


TorchArrow

TorchArrow is a library for machine learning preprocessing over batch data. Introduced as a beta release, TorchArrow offers a performant, Pandas-style, easy-to-use API to speed up preprocessing workflows and development. Its features include a high-performance CPU backend with vectorised, extensible UDFs built on Velox; seamless handoff to PyTorch; and zero-copy interoperability with external readers via the Arrow in-memory columnar format.

Functional API for Modules

PyTorch v1.12 also has a new beta feature for functionally applying Module computation with a given set of parameters. The traditional PyTorch Module usage pattern, in which a module maintains a static set of parameters internally, can be restrictive. This is especially true when implementing meta-learning algorithms, where multiple sets of parameters need to be maintained across optimiser steps.

Some of its features include: module computation with flexibility over the set of parameters used; no need to reimplement the module in a functional way; and the ability to swap any parameter or buffer in the module with an externally defined value for the duration of the call.
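The swap-in behaviour above can be sketched in a few lines. In v1.12 the entry point was `torch.nn.utils.stateless.functional_call`; recent releases expose the same idea as `torch.func.functional_call`, used here:

```python
import torch
import torch.nn as nn
from torch.func import functional_call  # torch.nn.utils.stateless.functional_call in v1.12

# Call a Linear module with externally defined parameters instead of the
# ones it holds internally; the module's own weight is left untouched.
lin = nn.Linear(3, 1, bias=False)
x = torch.randn(2, 3)

ones = {"weight": torch.ones(1, 3)}     # externally defined parameter set
out = functional_call(lin, ones, (x,))  # uses `ones`, not lin.weight
```

Because the parameters are passed in per call, a meta-learning loop can hold several such dictionaries and evaluate the same module against each of them without copying the module.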

Complex32 and Complex Convolutions in PyTorch

PyTorch already supports complex numbers, complex autograd, complex modules, and a range of complex operations, and libraries such as torchaudio and ESPnet rely on this support. The new version extends it further with complex convolutions and the experimental complex32 data type, which enables half-precision FFT operations.
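A minimal sketch of both additions: a 1-D convolution over complex-valued input (supported as of v1.12), and a cast to the experimental complex32 ("chalf") dtype. Note that half-precision FFTs over chalf tensors are currently limited to CUDA devices, so only the dtype conversion is shown here.

```python
import torch
import torch.nn as nn

# Complex convolution: the layer's weights and the input are both cfloat.
conv = nn.Conv1d(1, 1, kernel_size=3, dtype=torch.cfloat)
x = torch.randn(1, 1, 8, dtype=torch.cfloat)
y = conv(x)

# Experimental complex32 dtype; on CUDA this enables half-precision FFTs
# via torch.fft on the chalf tensor.
x_half = x.to(torch.chalf)
```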

Shraddha Goled
