PyTorch Lightning Releases Version 2 Of LightningCLI

PyTorch Lightning aims to be the simplest, most flexible framework for taking any kind of deep learning research to production.

The PyTorch Lightning team recently announced the release of LightningCLI v2 as part of the Lightning v1.5 release. PyTorch Lightning v1.5 comes with increased reliability to support the complex demands of the leading AI organizations and prestigious research labs that rely on Lightning to develop and deploy AI at scale.

Running non-trivial experiments often requires configuring many different trainer and model arguments such as learning rates, batch sizes, number of epochs, data paths, data splits and number of GPUs. These need to be exposed in a training script, as most experiments are launched from the command line.

Implementing these command-line tools with libraries such as Python's standard-library argparse to manage hundreds of possible trainer, data and model configurations is a huge source of boilerplate.
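For illustration, here is a minimal sketch of the kind of argparse boilerplate involved, in which every hyperparameter must be declared, typed and documented by hand (the argument names are hypothetical):

    import argparse

    # Every option is declared manually, duplicating the model's own
    # signature, defaults and docstrings.
    parser = argparse.ArgumentParser(description="Training script")
    parser.add_argument("--learning_rate", type=float, default=1e-3,
                        help="Optimizer learning rate")
    parser.add_argument("--batch_size", type=int, default=32,
                        help="Samples per training batch")
    parser.add_argument("--max_epochs", type=int, default=10,
                        help="Number of training epochs")
    # ...and so on for every trainer, data and model option
    args = parser.parse_args()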

This often leads to basic configurations being hard-coded and inaccessible for experimentation and reuse. Additionally, much of the configuration ends up duplicated: once in function signatures and argument defaults, and again in docstrings and argument help messages.

PyTorch Lightning's LightningCLI exposes arguments directly from your classes and functions, generates help messages from their docstrings, and performs type checking on instantiation. This means the command-line interface adapts to your code rather than the other way around.
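A minimal sketch of this, using the LightningCLI import path from the v1.5 release (MyModel and MyDataModule stand in for your own, hypothetical, classes):

    # trainer.py
    from pytorch_lightning.utilities.cli import LightningCLI

    from my_project import MyDataModule, MyModel  # hypothetical user-defined classes

    # Arguments, types, defaults and help messages are derived from the
    # classes themselves, so the CLI stays in sync with the code.
    cli = LightningCLI(MyModel, MyDataModule)

Running python trainer.py fit --help then lists every trainer, model and data argument, with help text taken from the corresponding docstrings.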

With LightningCLI, configuration handling no longer leaks into your research code. The code becomes the single source of truth, and your configuration is always up to date. The full configuration is also saved automatically after each run, which greatly simplifies the reproducibility of experiments, something critical for machine learning research.
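For example, a run can be reproduced from the saved configuration (the log directory shown is illustrative and depends on your logger):

    python trainer.py fit --model.learning_rate 0.01
    # ...writes the fully resolved config, e.g. lightning_logs/version_0/config.yaml

    # Reproduce the run from the saved configuration
    python trainer.py fit --config lightning_logs/version_0/config.yaml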

The new version also adds support for all the other Trainer entry points; developers choose which one to run by specifying it as a subcommand. A new notation for instantiating objects directly from the command line has also been included in this update. This dramatically improves the command-line experience, as almost any aspect of training can be customised by referencing class names alone.
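A sketch of both features, following the notation shown in the v1.5 documentation (the EarlyStopping values are example settings):

    # Subcommands select the Trainer entry point to run
    python trainer.py fit
    python trainer.py validate
    python trainer.py test
    python trainer.py predict
    python trainer.py tune

    # Instantiate and configure a callback by class name alone
    python trainer.py fit --trainer.callbacks=EarlyStopping \
        --trainer.callbacks.monitor=val_loss --trainer.callbacks.patience=5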

Optimizers and learning rate schedulers are also configurable. All of PyTorch's optimizers and learning rate schedulers (under torch.optim) are supported out of the box. This lets you experiment quickly without having to add support for each optimizer class in your LightningModule.configure_optimizers() method.
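A sketch of this notation, again following the v1.5 announcement (the learning rate and T_max values are arbitrary examples):

    # Swap in any torch.optim optimizer and scheduler from the command line,
    # without touching configure_optimizers()
    python trainer.py fit --optimizer=Adam --optimizer.lr=0.01 \
        --lr_scheduler=CosineAnnealingLR --lr_scheduler.T_max=10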

Lightning also exposes several registries in which you can store your own components via a decorator mechanism. This is supported for Callback, optimizer, lr_scheduler, LightningModule and LightningDataModule classes.
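A minimal sketch, assuming the registry decorators exported from pytorch_lightning.utilities.cli in v1.5 (the callback itself is a hypothetical example):

    import pytorch_lightning as pl
    from pytorch_lightning.utilities.cli import CALLBACK_REGISTRY

    @CALLBACK_REGISTRY
    class MyPrintingCallback(pl.Callback):
        """Prints a message when training starts."""

        def on_train_start(self, trainer, pl_module):
            print("Training is starting")

    # The registered class is now selectable by name:
    # python trainer.py fit --trainer.callbacks=MyPrintingCallback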

This is particularly interesting for library authors who want to provide their users with a range of models and data modules to choose from.

The new Lightning update aims to provide the best possible experience to anyone training and optimising models with PyTorch, and with the PyTorch Lightning API already stable, breaking changes should be minimal.

