Guide to Torchmeta: A Meta-Learning Library for PyTorch

Torchmeta is an open-source meta-learning library built on top of the PyTorch deep learning framework. The objective of Torchmeta is to allow easy benchmarking, to make existing meta-learning pipelines and research work reproducible, and to make them accessible to the larger community. Torchmeta was first presented in the research paper Torchmeta: A Meta-learning library for PyTorch, authored by Tristan Deleu, Tobias Würfl, Mandana Samiei, Joseph Paul Cohen, and Yoshua Bengio. The project is supported and tested by the Montreal Institute for Learning Algorithms (Mila).

Torchmeta is inspired by OpenAI Gym, which accelerated progress in reinforcement learning by providing access to multiple environments under a unified interface. Torchmeta provides data loaders for most of the standard datasets for few-shot classification and regression. It also includes extensions of PyTorch called meta-modules, which simplify the creation of models compatible with classic meta-learning algorithms that sometimes require higher-order differentiation. Torchmeta is fully compatible with torchvision and PyTorch's DataLoader.

Requirements & Installation
  • Python 3.6 or above
  • PyTorch 1.4 or above
  • Torchvision 0.5 or above

Install Torchmeta via pip.

pip install torchmeta
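To verify the installation, you can print the library version (a quick sanity check; this assumes Torchmeta exposes the standard __version__ attribute):

import torchmeta
print(torchmeta.__version__)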

DataLoaders for few-shot learning

Torchmeta automates the creation of each meta-training dataset. The data loaders in Torchmeta are fully compatible with standard PyTorch data components such as Dataset and DataLoader. The library provides a collection of datasets corresponding to classic few-shot classification and regression problems from the meta-learning literature.

Few-shot Regression

Most few-shot regression problems are simple regression tasks, where each task corresponds to a function (e.g. y = ax + b) with its own parameters that maps inputs to outputs. Torchmeta provides a base class called MetaDataset, from which meta-training sets inherit. Each dataset corresponds to a specific set of parameters of that function. A task is then created by sampling the parameters within a given range and generating input/output pairs from the resulting function.

This library currently contains three toy problems: Sinusoid, SinusoidAndLine, and Harmonic.

A simple regression task based on sinusoids is shown below; it instantiates the meta-training set for the sine-wave problem:

import torchmeta
torchmeta.toy.Sinusoid(num_samples_per_task=10, num_tasks=1000000, noise_std=None,
    transform=None, target_transform=None, dataset_transform=None)
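Once instantiated, the meta-dataset can be iterated over in batches of tasks. Below is a minimal sketch (assuming the default collation behaviour of BatchMetaDataLoader, introduced later in this article) that samples one batch of sine-wave tasks:

from torchmeta.toy import Sinusoid
from torchmeta.utils.data import BatchMetaDataLoader

dataset = Sinusoid(num_samples_per_task=10, num_tasks=1000)
dataloader = BatchMetaDataLoader(dataset, batch_size=16)

# One batch contains the inputs and targets of 16 sampled sine-wave tasks
inputs, targets = next(iter(dataloader))
print(inputs.shape)   # expected: torch.Size([16, 10, 1])
print(targets.shape)  # expected: torch.Size([16, 10, 1])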

You can check the full documentation here.

Few-shot Classification

For few-shot classification problems, the creation of each dataset follows two steps: first, N classes are sampled from a large collection of candidates, and then k examples are chosen per class. These steps are automated by Torchmeta under an object called CombinationMetaDataset (which inherits from MetaDataset). The library currently contains several few-shot image classification problems, including Omniglot, Mini-ImageNet, Tiered-ImageNet, CIFAR-FS, and FC100.

An example of how to instantiate the meta-training set is shown below:

import torchmeta
dataset = torchmeta.datasets.MiniImagenet("data", num_classes_per_task=5,
                                          meta_train=True, download=True)

Training and testing dataset splits

It is important to divide each dataset into a training and a test set for meta-optimization and evaluation. One thing to ensure is that the train and test sets do not contain common instances. For this, Torchmeta introduces a wrapper over the datasets called a Splitter, which splits each dataset. Shown below is an example of splitting the dataset with Torchmeta.

import torchmeta

dataset = torchmeta.datasets.MiniImagenet("data", num_classes_per_task=5,
                                          meta_train=True, download=True)
dataset = torchmeta.transforms.ClassSplitter(dataset, num_train_per_class=1,
                                             num_test_per_class=15, shuffle=True)
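After the split, every task sampled from the dataset is a dictionary with separate train and test datasets. A minimal sketch (assuming CombinationMetaDataset's tuple indexing, where a task is identified by a tuple of class indices):

# Index the meta-dataset with a tuple of 5 class indices to get one task
task = dataset[(0, 1, 2, 3, 4)]
train_task, test_task = task["train"], task["test"]
print(len(train_task))  # expected: 5 classes x 1 example each  = 5
print(len(test_task))   # expected: 5 classes x 15 examples each = 75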

Meta DataLoaders

The meta-dataset objects created in the few-shot regression and classification sections can be iterated over to generate tasks. These tasks are PyTorch Dataset objects, and as such can be included as part of any standard data pipeline (combined with DataLoader). However, most meta-learning algorithms operate on batches of tasks. Torchmeta groups tasks into batches with the help of BatchMetaDataLoader, and those batches can be iterated over.

from torchmeta.datasets.helpers import miniimagenet
from torchmeta.utils.data import BatchMetaDataLoader

# Helper function that applies the 1-shot, 5-way split for Mini-ImageNet
dataset = miniimagenet("data", shots=1, ways=5, meta_train=True, download=True)
dataloader = BatchMetaDataLoader(dataset, batch_size=16)

for batch in dataloader:
    train_inputs, train_targets = batch["train"]
    print('Train inputs shape: {0}'.format(train_inputs.shape))   # (16, 5, 3, 84, 84)
    print('Train targets shape: {0}'.format(train_targets.shape)) # (16, 5)
    break

Advanced example of Torchmeta

In this part, we combine everything discussed above into a single data pipeline.

from torchmeta.datasets import Omniglot
from torchmeta.transforms import Categorical, ClassSplitter, Rotation
from torchvision.transforms import Compose, Resize, ToTensor
from torchmeta.utils.data import BatchMetaDataLoader

dataset = Omniglot("data",
                   # Number of ways
                   num_classes_per_task=5,
                   # Resize the images to 28x28 and convert them to PyTorch tensors (from torchvision)
                   transform=Compose([Resize(28), ToTensor()]),
                   # Transform the labels to integers (e.g. ("Glagolitic/character01", "Sanskrit/character14", ...) to (0, 1, ...))
                   target_transform=Categorical(num_classes=5),
                   # Create new virtual classes with rotated versions of the images (from Santoro et al., 2016)
                   class_augmentations=[Rotation([90, 180, 270])],
                   meta_train=True,
                   download=True)
# Split each task's data into train and test sets
dataset = ClassSplitter(dataset, shuffle=True, num_train_per_class=5, num_test_per_class=15)
# Create batches of tasks from the dataset
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)
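Iterating over this dataloader yields batches of tasks, each with train and test splits. A short sketch of what one batch looks like (the shapes assume the 5-way, 5-shot/15-query Omniglot setup above):

for batch in dataloader:
    train_inputs, train_targets = batch["train"]  # (16, 25, 1, 28, 28), (16, 25)
    test_inputs, test_targets = batch["test"]     # (16, 75, 1, 28, 28), (16, 75)
    break
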
Meta-Learning Module

Models in PyTorch are created from basic components called modules; each module represents a layer of the neural network and contains both its computational graph and its parameters. However, some meta-learning algorithms require higher-order differentiation to update the parameters via backpropagation. For the easy implementation of such algorithms, Torchmeta provides extensions of PyTorch's modules called MetaModules (analogous to nn.Module in PyTorch), which give you the option to pass new parameters as an additional input. A MetaModule treats these new parameters as part of the computational graph, and backpropagation works as expected. Note that with no additional parameters, a MetaModule behaves exactly like the corresponding PyTorch module.

To illustrate, consider the MetaLinear module of Torchmeta with and without additional parameters. The module is first initialized; it can then be called in the default manner (using its own parameters, exactly like nn.Linear), or with additional parameters passed through the params argument.
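A minimal sketch of both flows (assuming MetaModule's meta_named_parameters() helper, which collects the module's parameters into an OrderedDict):

from collections import OrderedDict
import torch
from torchmeta.modules import MetaLinear

model = MetaLinear(2, 1)
inputs = torch.rand(4, 2)

# Default flow: without `params`, MetaLinear behaves exactly like nn.Linear
output = model(inputs)

# Flow with additional parameters, e.g. those produced by an inner gradient
# step; here we simply reuse the module's own parameters for illustration
params = OrderedDict(model.meta_named_parameters())
output = model(inputs, params=params)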

Given below is an example built on MetaModule (the base class). These modules accept an additional argument params in their forward method. The architecture of a small neural network built with MetaModule is shown below.

#import the required libraries and Meta modules from torchmeta
import torch.nn as nn
from torchmeta.modules import (MetaModule, MetaSequential,
                               MetaConv2d, MetaLinear)
 
class Model(MetaModule):
    def __init__(self, in_channels, num_classes):
        super(Model, self).__init__()
        #MetaSequential is similar to nn.Sequential: a sequential container
        #where modules are added in the order they are passed in the constructor.
        #Here a MetaConv2d convolutional layer is followed by a ReLU and a MaxPool.
        self.features = MetaSequential(MetaConv2d(in_channels, 64, 3),
                                       nn.ReLU(),
                                       nn.MaxPool2d(2))
        #MetaLinear is similar to torch.nn.Linear
        #Applies a linear transformation to the incoming data
        self.classifier = MetaLinear(64, num_classes)
 
    def forward(self, inputs, params=None):
        features = self.features(inputs,
                                 params=self.get_subdict(params, 'features'))
        logits = self.classifier(features.view((inputs.size(0), -1)),
                                 params=self.get_subdict(params, 'classifier'))
        return logits
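To see what the params argument enables, below is a minimal, hypothetical MAML-style inner/outer update using the Model above. It relies on Torchmeta's gradient_update_parameters helper (from torchmeta.utils.gradient_based), which returns updated parameters without modifying the model in place; the dummy 4x4 inputs are chosen only so that the flattened features match MetaLinear(64, num_classes) in this toy architecture:

import torch
import torch.nn.functional as F
from torchmeta.utils.gradient_based import gradient_update_parameters

model = Model(in_channels=1, num_classes=5)
# Dummy task data; 4x4 inputs flatten to exactly 64 features after conv + pooling
train_inputs, train_targets = torch.randn(25, 1, 4, 4), torch.randint(5, (25,))
test_inputs, test_targets = torch.randn(75, 1, 4, 4), torch.randint(5, (75,))

# Inner loop: one gradient step on the task's training (support) set
inner_loss = F.cross_entropy(model(train_inputs), train_targets)
params = gradient_update_parameters(model, inner_loss, step_size=0.4)

# Outer loop: evaluate the adapted parameters on the task's test (query) set
outer_loss = F.cross_entropy(model(test_inputs, params=params), test_targets)
outer_loss.backward()  # backpropagates through the inner step (higher-order gradients)
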
Conclusion

In this article, we discussed Torchmeta and its main components, such as its meta data loaders and MetaModule.

To learn more about Torchmeta, you can check the examples available in the project's repository, as well as its implementation of MAML, for a more detailed showcase of all the features of Torchmeta.

Official code, documentation, and tutorials are available in the Torchmeta GitHub repository.
