On Friday, Jeremy Howard’s fast.ai announced the release of a set of new libraries, along with a machine learning book and a course. fastai is a popular deep learning library that provides high-level components for obtaining state-of-the-art results in standard deep learning domains, while also letting practitioners mix and match lower-level components to discover new approaches. In short, it aims to make deep learning solutions hassle-free.
The libraries leverage the dynamism of the underlying Python language and the flexibility of the PyTorch library.
The latest version, ‘fastai v2’, is a complete rewrite of fastai that is faster, easier to use, and more flexible, implementing new approaches to deep learning framework design.
Overview Of The New Releases
Fast.ai makes it very easy to migrate from plain PyTorch, Ignite, or any other PyTorch-based library, or even to use fastai in conjunction with such libraries. Users can pair fastai’s GPU-accelerated computer vision library with their own training loop, pick and choose augmentations such as mixup and cutout, or use its uniquely flexible GAN training framework, which isn’t available in any other framework.
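To illustrate one of the augmentations mentioned above, here is a minimal NumPy sketch of the mixup idea: each batch is blended with a shuffled copy of itself, interpolating both inputs and labels. This is an illustration of the technique only, not fastai’s MixUp callback, which applies the same idea inside the training loop.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.4, rng=None):
    """Blend a batch with a shuffled copy of itself (mixup).

    x: batch of inputs, shape (n, ...); y: one-hot labels, shape (n, c).
    Returns interpolated inputs and labels.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))     # pair each sample with a random partner
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y + (1 - lam) * y[perm]
    return x_mixed, y_mixed
```

Because the labels are interpolated too, the model is trained on soft targets whose weights still sum to one, which is what regularises the decision boundaries.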
According to fastai, the new version includes the following:
- A new type dispatch system for Python along with a semantic type hierarchy for tensors
- A GPU-optimised computer vision library which can be extended in pure Python
- An optimiser which refactors the common functionality of modern optimisers into two basic pieces, allowing optimisation algorithms to be implemented in 4-5 lines of code
- A novel 2-way callback system that can access any part of the data, model, or optimiser and change it at any point during training
- A new data block API
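The “2-way” callback idea from the list above can be sketched in plain Python: callbacks are invoked at fixed points in the training loop and can both read and rewrite the learner’s state. This is a hypothetical minimal sketch of the concept, not fastai’s actual Callback API; the class and hook names are made up for illustration.

```python
class Callback:
    # Hooks return nothing; they mutate the learner in place.
    def before_batch(self, learn): pass
    def after_loss(self, learn): pass

class LossCapCallback(Callback):
    # Illustrative: a callback that rewrites state mid-training,
    # here capping the loss value (a stand-in for e.g. gradient clipping).
    def __init__(self, max_loss): self.max_loss = max_loss
    def after_loss(self, learn):
        learn.loss = min(learn.loss, self.max_loss)

class Learner:
    # Toy training loop: the "2-way" part is that callbacks both
    # observe AND modify the learner's state at each hook point.
    def __init__(self, batches, cbs):
        self.batches, self.cbs = batches, cbs
    def _run(self, hook):
        for cb in self.cbs:
            getattr(cb, hook)(self)
    def fit(self):
        losses = []
        for b in self.batches:
            self.batch = b
            self._run('before_batch')
            self.loss = float(self.batch)   # fake "loss" = batch value
            self._run('after_loss')
            losses.append(self.loss)
        return losses
```

Running `Learner([0.5, 3.0], [LossCapCallback(1.0)]).fit()` shows the second batch’s loss being rewritten by the callback before it is recorded.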
Now, let’s take a look at the foundation libraries used in fastai v2 :
Using fastgpu, one can check for scripts to run, and then run them on the first available GPU. If no GPUs are available, it waits. In case of multiple GPU availability, multiple scripts are run in parallel, one per GPU. This property allows researchers to run ablation studies by taking advantage of all GPUs with no parallel processing overhead or manual intervention.
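The scheduling idea behind fastgpu can be sketched with the standard library: each worker owns one GPU id, and a script waits until some GPU frees up before running. This is a sketch of the concept only, not fastgpu’s real API; `run(script, gpu)` stands in for a hypothetical launcher, e.g. one that sets CUDA_VISIBLE_DEVICES and invokes the script.

```python
from concurrent.futures import ThreadPoolExecutor
import queue

def run_all(scripts, gpu_ids, run):
    """Run each script on the first available GPU, one script per GPU."""
    free = queue.Queue()
    for g in gpu_ids:
        free.put(g)
    def worker(script):
        gpu = free.get()              # blocks until a GPU is available
        try:
            return run(script, gpu)   # hypothetical launcher callback
        finally:
            free.put(gpu)             # release the GPU for the next script
    # One thread per GPU gives "multiple scripts in parallel, one per GPU".
    with ThreadPoolExecutor(max_workers=len(gpu_ids)) as ex:
        return list(ex.map(worker, scripts))
```

With two GPU ids and three scripts, the third script simply waits until one of the first two finishes, which is the behaviour the article describes.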
Using fastcore one can add features such as multiple dispatches from Julia, mixins from Ruby and more to Python. fastcore also adds some “missing features” and cleans up some rough edges in the Python standard library, such as simplifying parallel processing and bringing ideas from NumPy over to Python’s list type.
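The multiple-dispatch idea fastcore borrows from Julia can be illustrated with a minimal pure-Python sketch: pick an implementation based on the type annotation of the argument. This is a hypothetical toy, not fastcore’s actual `typedispatch` implementation, and it dispatches on the first argument only.

```python
class TypeDispatch:
    """Toy dispatcher: route calls to a function registered for the
    annotated type of the first parameter."""
    def __init__(self):
        self.funcs = {}
    def register(self, f):
        # Key on the annotation of the first parameter.
        ann = list(f.__annotations__.values())[0]
        self.funcs[ann] = f
        return f
    def __call__(self, x, *args, **kw):
        for typ, f in self.funcs.items():
            if isinstance(x, typ):
                return f(x, *args, **kw)
        raise TypeError(f"no implementation for {type(x)}")

describe = TypeDispatch()

@describe.register
def _(x: int): return f"int {x}"

@describe.register
def _(x: str): return f"str {x!r}"
```

Calling `describe(3)` and `describe("hi")` routes to different implementations based purely on the argument’s type, which is the ergonomics fastcore brings to Python.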
Creating a quick command-line script with Python’s argparse can be tedious. fastscript eliminates the boilerplate and turns a plain Python function into a command-line application.
Here’s a code snippet of fastscript in action:
from fastscript import *

@call_parse
def main(msg:Param("The message", str),
         upper:Param("Convert to uppercase?", bool_arg)=False):
    print(msg.upper() if upper else msg)
fastai also lets newcomers try computer vision, NLP, and recommendation-system models with similar-looking code of around five lines each. Let’s take a look at a few samples offered by fastai:
1| Computer vision classification
For this example, the Oxford-IIIT Pet Dataset is used. This dataset contains 7,349 images of cats and dogs from 37 different breeds. Here, a pre-trained model trained on 1.3 million images is fine-tuned to create a model that classifies cats and dogs. In this dataset, cat images have filenames beginning with an uppercase letter, which the label function below exploits.
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()  # cat filenames start with an uppercase letter
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
2| Text classification
This example demonstrates how to train a model to identify a user’s sentiment about a movie from a review.
from fastai.text.all import *

dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(1)  # fine-tune the pre-trained language model on the reviews
3| Build Recommendation systems
Using the MovieLens dataset, this example shows how to build a recommender system that predicts which movies people might like, based on their previous viewing habits.
from fastai.collab import *

path = untar_data(URLs.ML_SAMPLE)
dls = CollabDataLoaders.from_csv(path/'ratings.csv')
learn = collab_learner(dls, y_range=(0.5, 5.5))
learn.fine_tune(10)
learn.show_results()  # display sample predictions alongside actual ratings
Learn more about fastai v2 here.