
Top 5 Announcements At PyTorch Developer Day Conference

While PyTorch Live was the biggest announcement of the PyTorch Developer Day Conference 2021, there were many other important highlights.

PyTorch designed the Developer Day Conference – held virtually for 2021 – for developers and users to discuss core technical developments, ideas, and roadmaps. Day 1 was streamed live and later uploaded to Twitter, LinkedIn and Facebook, gathering over 25,000 views. From demos and learning sessions to PyTorch’s achievements of 2021, a lot was discussed throughout the day.

Here are some highlights of the new and interesting things Meta’s machine-learning research lab is coming up with:

Launch of PyTorch Live 

The big announcement was the launch of PyTorch Live, a set of tools that makes it easier to build mobile AI-powered experiences. These tools support building products on both iOS and Android. There is no need to write the same app in two different languages; PyTorch Live uses JavaScript as a unified language to write apps for both platforms.

To achieve this, PyTorch Live is powered by two successful open-source projects: PyTorch Mobile, which powers on-device inference for PyTorch Live, and React Native, a library for building visual, interactive UIs.

PyTorch Live has three highlights. The first is the CLI, which enables developers to quickly set up a mobile deployment environment and bootstrap mobile app projects. The second is the Data Processing API, which helps prepare and integrate custom models for use with the PyTorch Live API. The last is cross-platform apps: building mobile AI-powered apps for both Android and iOS using the PyTorch Live API.
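
Under the hood, a model has to be prepared for PyTorch Mobile’s lite interpreter before an app can run it on device. A minimal sketch of that export step (the `TinyNet` model and file name are illustrative, not from the talk):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# A toy model standing in for whatever the app ships (illustrative).
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()
# TorchScript the model so it can run without Python on device.
scripted = torch.jit.script(model)
# Apply mobile-specific graph optimizations.
optimized = optimize_for_mobile(scripted)
# Save in the lite-interpreter format that PyTorch Mobile loads.
optimized._save_for_lite_interpreter("tinynet.ptl")
```

The saved `.ptl` file is what the mobile runtime consumes on both platforms.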

Release of PyTorch Profiler 1.9 and 1.10 

The PyTorch Profiler collects performance metrics during training and inference and provides actionable guidance to optimize PyTorch model performance. PyTorch Profiler 1.9 added new features: a distributed training view, a memory view, GPU utilization, cloud storage support, and Visual Studio Code integration, which lets one jump directly to source code. The newly added features of version 1.10 are forward/backward correlation, an enhanced memory view, recommendation enhancements, Gloo support and Tensor Core support.
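
The profiler is driven from the `torch.profiler` API; a minimal sketch of collecting CPU-side operator metrics for one forward pass (the model and sizes are illustrative):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(64, 8)
x = torch.randn(32, 64)

# Record operator-level timings and input shapes for one inference.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(x)

# Summarize the hottest operators; the same trace can also be exported
# for the TensorBoard-based views the release notes describe.
report = prof.key_averages().table(sort_by="cpu_time_total", row_limit=5)
print(report)
```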

Introduction to TorchBench

TorchBench is a new benchmark suite that has been released as open source. It captures models the way users actually use them and is focused on the researcher use case. The purpose is to improve the framework itself, not just individual models; this requires aggregating scores from many different models across a very large suite. For TorchBench to be successful, diversity is key. This is tricky, as there is a large variety of models and it is unclear what to benchmark. The team will bring together a large number of models, combined based on the distribution in the image below:

Source: PyTorch Developers Day 1, broadcast presentation
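
TorchBench itself lives in the `pytorch/benchmark` repository; the underlying idea of timing a model the way users run it can be sketched with PyTorch’s built-in benchmarking utility (the toy model here is illustrative):

```python
import torch
import torch.utils.benchmark as benchmark

# A stand-in model; TorchBench runs real research models instead.
model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU())
x = torch.randn(16, 128)

# Time the model exactly as a user would invoke it in eager mode.
timer = benchmark.Timer(
    stmt="model(x)",
    globals={"model": model, "x": x},
    label="toy eager-mode inference",
)
measurement = timer.timeit(50)  # runtime statistics over 50 runs
print(measurement)
```

Aggregating such measurements across a diverse model suite is what lets framework-level regressions show up, rather than improvements to any single model.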

Investments in Ecosystem

According to Dwarak Rajagopal, Engineering Director, Meta AI, “Community and Ecosystem is an important factor for sustaining both fast research innovation and hyper-growth in production. To grow the ecosystem 10X more, it is important to build extension points for the ecosystem.” There are three levels of extension that enable extending PyTorch to several ecosystem components.

To grow the ecosystem and community and be able to innovate at the API level, PyTorch has planned to increase its investment in the following: 

Core Authoring Ecosystem

To take it to the next level, PyTorch plans to make the core front-end language even more extensible for authoring innovators. For this, PyTorch is building extensible dispatch and subclassing, which includes making the dispatch system itself a tool for building a more extensible front-end. The team is also working on the ability to override autograd-like compute APIs (e.g., vmap, quantization), not just new operators or primitives. The planned abilities also provide scope to extend behaviour in C++ or Python via callbacks and subclassing, whichever fits the use case best.
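
As a taste of the subclassing direction, today’s `__torch_function__` protocol already lets a `Tensor` subclass intercept every operator call from Python (the `LoggingTensor` name and behaviour are illustrative):

```python
import torch

class LoggingTensor(torch.Tensor):
    """Tensor subclass that records which torch ops are called on it."""
    calls = []

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        cls.calls.append(func.__name__)   # observe the op being dispatched
        # Fall back to the default behaviour, preserving the subclass.
        return super().__torch_function__(func, types, args, kwargs)

t = torch.randn(3).as_subclass(LoggingTensor)
out = t + t   # routed through __torch_function__ before executing
```

The same hook pattern is what lets ecosystem libraries layer new semantics (logging, quantization, functional transforms) on top of ordinary tensor code.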

Implementations Ecosystem

PyTorch plans to build richer “batteries included” functionality, serving as a foundation and framework for OSS library authors. For this, PyTorch has doubled its investment in supporting domain ecosystem libraries – providing hardened, table-stakes primitives and common extensibility foundations for OSS authors and maintainers. DataLoader v2 and TorchData are built-in tools for accelerated data loading and dataset authoring. With the new PyTorch Profiler and TorchBench, they are providing a better debugging and performance-measurement experience.
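
TorchData builds on the familiar `torch.utils.data` stack; a minimal sketch of the dataset-authoring pattern it modularizes (the `SquaresDataset` is illustrative):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy map-style dataset: item i is the pair (i, i*i) as tensors."""
    def __len__(self):
        return 8

    def __getitem__(self, i):
        return torch.tensor(i), torch.tensor(i * i)

# DataLoader handles batching, shuffling, and (optionally) worker processes;
# DataLoader v2 / TorchData factor these stages into composable pieces.
loader = DataLoader(SquaresDataset(), batch_size=4)
batches = list(loader)
```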

Execution Ecosystem

PyTorch is building more extensibility and hooks for execution and productionisation partners. Within the execution ecosystem, for training and end-to-end workflows, they have built TorchX, a job launcher with built-in components for running jobs and workflows on schedulers, pipelines, and more. On the program-transformation side, they have built torch.fx, a toolkit for creating composable transformations for custom compilation, execution engines, profiling, and more. They also have PyTorch Mobile to run models on edge devices, and TorchServe to provide out-of-the-box model serving that integrates with the cloud provider of your choice. With torch.package and torch::deploy, they are making it easier to run inference efficiently in a multithreaded environment for production model serving.
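
torch.fx works by symbolically tracing a module into a graph that transformations can then rewrite; a small sketch (the ReLU-to-sigmoid swap is an arbitrary illustrative transform, not one from the talk):

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

# Trace the module into an editable graph representation.
gm = torch.fx.symbolic_trace(M())

# Rewrite the graph: swap relu for sigmoid (purely illustrative).
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.sigmoid
gm.recompile()

x = torch.randn(4)
out = gm(x)   # now computes sigmoid(x) + 1.0
```

Compilation and execution-engine partners build on exactly this kind of graph rewriting.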

Brewing in Prototypes

A lot of PyTorch functionality is currently in the works; it is not ready yet but is available as prototype features in the repositories. These include functorch, which brings composable function transforms such as vmap and grad, allowing one, for example, to efficiently compute per-sample gradients. The team continues to improve framework extension points so that backends and libraries of all kinds can plug in. Lazy Tensor Core provides the ability to experiment with new tensor extensions entirely on the Python side. On the distributed side, sharded tensor support and general model-parallelism primitives are currently in the works. The TorchData project aims to make DataLoader more modular, extensible and performant.
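
The per-sample-gradient idea can be sketched with the transforms that shipped as functorch (exposed as `torch.func` in recent PyTorch releases; the `loss` function here is illustrative):

```python
import torch
from torch.func import grad, vmap  # originally shipped as functorch

def loss(w, x):
    # Scalar loss for a single sample x with shared weights w.
    return (x @ w).sum() ** 2

w = torch.randn(3)
xs = torch.randn(5, 3)  # a batch of 5 samples

# grad(loss) gives the gradient w.r.t. w for one sample;
# vmap maps it over the batch dimension of xs without a Python loop.
per_sample_grads = vmap(grad(loss), in_dims=(None, 0))(w, xs)
```

The result is one gradient per sample, shape `(5, 3)`, computed in a single vectorized call rather than a loop of backward passes.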


Meeta Ramnani

