Modern large-scale automation systems integrate thousands to hundreds of thousands of data points. Demands for more flexible reconfiguration of production systems, and for optimisation across different information models, standards and legacy systems, challenge current concepts of system interoperability. According to experts, this has become an increasingly important problem: meeting these demands cost-efficiently is hard given limited human capacity and resources, tight timing requirements and growing system complexity.
AI is used in many ways, from providing intelligent shopping recommendations to detecting harmful content, translating text and generating automated captions. Behind the scenes, these applications typically look the same: a config file goes in, and a trained model comes out. A configuration file consists of the parameters and initial settings used by user applications, server processes and the operating system.
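Concretely, such a configuration might bundle the model, optimiser, data and training settings in one place. The fields below are hypothetical, purely for illustration:

```python
# Hypothetical training configuration: the parameters and initial
# settings a platform would consume to produce a trained model.
config = {
    "model": {"architecture": "resnet50", "num_classes": 1000},
    "optimizer": {"name": "sgd", "lr": 0.1, "momentum": 0.9},
    "data": {"train_path": "/datasets/example/train", "batch_size": 256},
    "trainer": {"max_epochs": 90},
}
```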
Establishing interoperability within an organisation is a huge challenge, and Facebook is one organisation facing it. So, the engineering team at Facebook AI decided to take a modularised approach to the problem.
Facebook AI is reengineering its platforms to be more modular and interoperable. The reusable components are split into standalone libraries, and backwards-compatible entry points are provided for end-to-end training from configs.
However, this whole config-in, model-out pipeline relies on custom abstractions that, according to the engineers, limit flexibility and reusability in other projects. It is also unhelpful for researchers who would like to experiment with different platforms and extend them to novel use cases.
The whole objective of this ambitious reengineering project is to make models originally built for one platform usable on others. This cross-pollination between research communities, says the team, requires aligning to a shared, general API and electing to share as many components as possible, and is well worth it. They liken it to the way techniques in NLP transfer to computer vision and vice versa.
Modularisation: A FAIR Solution
The training loop was offloaded to the open-source PyTorch Lightning, and the config system to Facebook's own Hydra. Let's take a look at how Facebook is using these tools to take care of modularity and interoperability:
The engineering team split the reusable components into standalone libraries for end-to-end training from configs. To handle configs, they leveraged the open-source Hydra framework. For organising components and managing the training loop, they are offering integration with PyTorch Lightning, a lightweight and open-source Python library.
The Hydra framework lets users compose and override configurations in a type-safe way, and offers abstractions for launching to different clusters and running sweeps and hyperparameter optimisation without changes to the application's code. According to Facebook, Hydra reduces the need for boilerplate code and lets researchers and engineers focus on what really matters.
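Hydra's real API is built around decorated entry points, YAML config groups and structured configs; the compose-then-override idea underneath it can be sketched with plain dicts. All names below are illustrative, not Hydra's:

```python
# Stdlib sketch of config composition plus command-line overrides,
# the pattern Hydra provides (its actual API is richer and type-safe).

def compose(*layers):
    """Deep-merge config dicts; later layers override earlier ones."""
    merged = {}
    for layer in layers:
        for key, value in layer.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = compose(merged[key], value)
            else:
                merged[key] = value
    return merged

def apply_overrides(cfg, overrides):
    """Apply 'dotted.key=value' overrides, as on a Hydra command line."""
    for item in overrides:
        dotted, _, raw = item.partition("=")
        *path, leaf = dotted.split(".")
        node = cfg
        for part in path:
            node = node.setdefault(part, {})
        # ints only for brevity; a real config system parses full types
        node[leaf] = int(raw) if raw.isdigit() else raw

defaults = {"optimizer": {"name": "sgd", "lr": 0.1}, "epochs": 90}
experiment = {"optimizer": {"lr": 0.01}}

cfg = compose(defaults, experiment)
apply_overrides(cfg, ["epochs=10", "optimizer.name=adam"])
print(cfg)
# → {'optimizer': {'name': 'adam', 'lr': 0.01}, 'epochs': 10}
```

Because the application only ever sees the final merged config, sweeps and cluster launches can vary the overrides without touching the application's code.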
PyTorch Lightning allows for increased standardisation and automation. The latest version reduces the cognitive overhead of switching between platforms while encouraging modularity. The team at Facebook AI believes that with something like PyTorch Lightning, sharing common functionality, such as checkpointing, quantisation or scripting, amongst tools becomes easier. Added to this, users retain the option of custom training loops for specific use cases. Going forward, Facebook will be implementing these design ideas in platforms like Classy Vision, PyText and fairseq.
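The design idea behind Lightning's module/trainer split can be pictured without torch: the model declares what happens per step, while the trainer owns the loop and shared services such as checkpointing. This is an illustrative skeleton whose hook names merely echo Lightning's, not its real implementation:

```python
# Sketch of the module/trainer split: the module defines per-step
# behaviour; the trainer runs the loop and provides shared services.

class Module:
    def training_step(self, batch):
        """Override: compute and return a loss for one batch."""
        raise NotImplementedError

class Trainer:
    def __init__(self, max_epochs=1):
        self.max_epochs = max_epochs
        self.checkpoints = []  # stand-in for real checkpointing

    def fit(self, module, data):
        for _ in range(self.max_epochs):
            losses = [module.training_step(batch) for batch in data]
            self.checkpoints.append(sum(losses) / len(losses))
        return self.checkpoints

class Doubler(Module):
    """Toy 'model': loss is the squared error of predicting 2*x."""
    def __init__(self, w=1.0):
        self.w = w
    def training_step(self, batch):
        x, y = batch
        return (self.w * x - y) ** 2

history = Trainer(max_epochs=2).fit(Doubler(), [(1, 2), (2, 4)])
```

Because every model exposes the same hooks, services like checkpointing live once in the trainer instead of being reimplemented per platform, which is the sharing the Facebook team is after.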
For example, Classy Vision is a PyTorch-based framework for large-scale training of state-of-the-art image and video classification models. Its libraries are modular and flexible. The framework eliminates the need to build custom frameworks, which, in production settings, leads to duplicated effort and requires users to migrate research between frameworks and relearn the minutiae of efficient distributed training and data loading.
By modularising the platform into two parts — a library and multiple entry points, Facebook is enabling its clients to access their platform in a much more productive way. They can now integrate platforms with code coming from other libraries in their projects and make interactive development in notebooks much more natural.
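The library-plus-entry-points split can be sketched as an importable training function wrapped by a thin script; the same function is then equally callable from a notebook or from another project's code. The names here are hypothetical:

```python
# Sketch of the "library + entry points" split (hypothetical names):
# training logic lives in an importable function, and a thin CLI
# wrapper is just one of several possible entry points.

def train(cfg):
    """Library function: importable from notebooks or other projects."""
    lr = cfg.get("lr", 0.1)
    epochs = cfg.get("epochs", 1)
    # ... real training would happen here ...
    return {"lr": lr, "epochs": epochs, "status": "trained"}

def main(argv):
    """Thin CLI entry point: parse 'key=value' args, call the library."""
    cfg = dict(arg.split("=", 1) for arg in argv)
    return train(cfg)

if __name__ == "__main__":
    import sys
    print(main(sys.argv[1:]))
```

A notebook user simply calls `train({...})` directly, while batch jobs go through `main`, so the backwards-compatible config-driven workflow and interactive development coexist.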
Back when each new deep learning project required significant engineering investment, Facebook had already started building tools on top of Lua Torch and Caffe2. Having an API built around a single config file producing a trained model, without requiring users to write any code, made sense back then: it saved time and made the tools accessible to all.
But things have changed. What AI teams can accomplish today couldn't be done on yesteryear's platforms. Thanks to PyTorch, tools such as fairseq, PyText and Classy Vision let users build and train advanced models that simply weren't feasible a few years ago.
A single entry point discourages the ML engineer community from building modular and reusable components, because even for experts it can be hard to onboard to a new platform. The names and locations of common components often vary widely, and different platforms end up with redundancies. Importing components becomes impossible when different frameworks rely on different config systems. For these reasons, Facebook suggests the time is right to bring about the next generation of content understanding platforms and to encourage cross-pollination.