PyTorch Lightning recently announced the release of LightningCLI v2 as part of the Lightning v1.5 release. PyTorch Lightning v1.5 comes with increased reliability to support the complex demands of the leading AI organizations and prestigious research labs that rely on Lightning to develop and deploy AI at scale.
PyTorch Lightning aims to be the simplest, most flexible framework for taking any kind of deep learning research to production.
Running non-trivial experiments often requires configuring many different trainer and model arguments, such as learning rates, batch sizes, number of epochs, data paths, data splits, and number of GPUs. These need to be exposed in a training script, since most experiments are launched from the command line.
Implementing these command-line tools with libraries such as Python's standard-library argparse to manage hundreds of possible trainer, data, and model configurations is a major source of boilerplate.
It often leads to basic configurations being hard-coded and inaccessible for experimentation and reuse. Additionally, most of the configuration gets duplicated across signatures and argument defaults, as well as docstrings and argument help messages.
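The duplication is easy to see in a hand-rolled CLI. In the illustrative snippet below (all argument names and defaults are assumptions, not from any real project), each hyperparameter appears once in the flag name, once in its type, once in its default, and once more in the help string:

```python
import argparse

# A hand-rolled training CLI: every hyperparameter is spelled out four times
# (flag, type, default, help), and this repeats for dozens of arguments.
parser = argparse.ArgumentParser(description="Train a model")
parser.add_argument("--learning_rate", type=float, default=1e-3,
                    help="Optimizer learning rate")
parser.add_argument("--batch_size", type=int, default=32,
                    help="Samples per training batch")
parser.add_argument("--max_epochs", type=int, default=10,
                    help="Number of training epochs")
# ...and so on for data paths, splits, GPUs, etc.

# Parsing an explicit argument list, as a script would parse sys.argv:
args = parser.parse_args(["--learning_rate", "0.01"])
```

Any change to the model now has to be mirrored here by hand, which is exactly the boilerplate LightningCLI is designed to remove.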
Lightning's LightningCLI exposes arguments directly from your classes and functions and generates help messages from their docstrings, while performing type checking on instantiation. This means that the command-line interface adapts to your code instead of the other way around.
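The underlying idea is signature introspection. This rough sketch uses only the standard library and a made-up class (it is not Lightning's actual implementation, which builds on jsonargparse) to show how CLI options can be derived from code rather than written by hand:

```python
import inspect

class MyModel:
    """Illustrative model class (not a real LightningModule)."""

    def __init__(self, learning_rate: float = 1e-3, hidden_dim: int = 128):
        self.learning_rate = learning_rate
        self.hidden_dim = hidden_dim

def cli_options_from(cls):
    """Derive CLI options (name -> (type, default)) from a class signature."""
    sig = inspect.signature(cls.__init__)
    return {
        name: (param.annotation, param.default)
        for name, param in sig.parameters.items()
        if name != "self"
    }

# The class itself becomes the single source of truth for the CLI.
options = cli_options_from(MyModel)
```

Because the options are read from the signature, renaming a parameter or changing a default updates the CLI automatically.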
With LightningCLI, configuration support no longer leaks into your research code. The code becomes the source of truth and your configuration is always up to date. The full configuration is automatically saved after each run, which greatly simplifies reproducing experiments, a critical concern for machine learning research.
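The saved configuration is a plain YAML file (config.yaml by default). An illustrative fragment, with assumed keys and values, might look like:

```yaml
# config.yaml — written automatically after a run (values are illustrative)
trainer:
  max_epochs: 10
  accelerator: gpu
model:
  learning_rate: 0.001
  hidden_dim: 128
```

Re-running with the saved file reproduces the exact configuration of the original experiment.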
The new version also adds support for all the other Trainer entry points; developers choose which one to run by specifying it as a subcommand. This update also introduces a new notation for instantiating objects directly from the command line, which dramatically improves the command-line experience: you can customise almost any aspect of your training by referencing class names alone.
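Conceptually, instantiating from a class name means resolving the name to a class among those available and constructing it with the remaining arguments. A toy resolver with hypothetical callback classes (a sketch of the mechanism, not Lightning's code):

```python
# Hypothetical callback classes for illustration only.
class EarlyStopping:
    def __init__(self, patience: int = 3):
        self.patience = patience

class ModelCheckpoint:
    def __init__(self, save_top_k: int = 1):
        self.save_top_k = save_top_k

# Map class names to classes, as a CLI would over its known components.
AVAILABLE = {cls.__name__: cls for cls in (EarlyStopping, ModelCheckpoint)}

def instantiate(name: str, **kwargs):
    """Resolve a class by name and construct it with the given arguments."""
    return AVAILABLE[name](**kwargs)

# What a flag pair like "callback name + its option" boils down to:
callback = instantiate("EarlyStopping", patience=5)
```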
Optimizers and learning rate schedulers are also configurable. All of PyTorch's optimizers and learning rate schedulers (under torch.optim) are supported out of the box, so you can experiment quickly without adding support for each optimizer class in your LightningModule.configure_optimizers() method.
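Supporting an entire module's worth of classes suggests resolving them by import path rather than hard-coding each one. A generic sketch using importlib, demonstrated with a standard-library class since this example does not assume torch is installed:

```python
import importlib

def class_from_path(class_path: str):
    """Import a class from a dotted path such as 'torch.optim.Adam'."""
    module_name, _, class_name = class_path.rpartition(".")
    return getattr(importlib.import_module(module_name), class_name)

# Demonstrated with the standard library so the sketch stays self-contained;
# the same call would resolve "torch.optim.Adam" in a torch environment.
resolved = class_from_path("collections.OrderedDict")
d = resolved(a=1)
```

With this pattern, adding a new optimizer to torch.optim requires no change to the CLI at all.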
Lightning also exposes several registries for you to store your Lightning components via a decorator mechanism. This is supported for callbacks, optimizers, learning rate schedulers, LightningModules, and LightningDataModules.
This is particularly interesting for library authors who want to provide their users with a range of models and data modules to choose from.
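The decorator-registry pattern itself is simple; here is a minimal sketch of our own (illustrative only, not Lightning's registry classes) showing how a library author could register a model for users to select by name:

```python
class Registry(dict):
    """A minimal decorator-based registry (a sketch, not Lightning's class)."""

    def register(self, cls):
        self[cls.__name__] = cls
        return cls  # return the class unchanged so the decorator is transparent

MODEL_REGISTRY = Registry()

@MODEL_REGISTRY.register
class LibraryModel:
    """A model a library author might ship for users to select by name."""

# Users can now pick the model by name, e.g. from a CLI argument:
model = MODEL_REGISTRY["LibraryModel"]()
```

Because registration happens at import time, simply importing the library makes all of its components available to the CLI.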
The new Lightning update aims to provide the best possible experience to anyone training models with PyTorch, and with the PyTorch Lightning API already stable, breaking changes should be minimal.