Recently, Uber open-sourced Neuropod, a library that provides a uniform interface to run deep learning models from multiple frameworks in C++ and Python. Developed at Uber ATG, this deep learning inference engine makes different frameworks look the same when running a model.
With the advancement of various frameworks, deep learning has seen rapid growth. In one of our earlier articles, we discussed how Uber, for the first time, opened up about what goes on under the hood of its Advanced Technologies Group’s (ATG’s) machine learning infrastructure and version control platform for autonomous vehicles.
In March, the ride-hailing company announced VerCD, a set of tools and microservices developed to support its machine learning workflow, and now it has open-sourced its Neuropod library.
As self-driving technology evolves, the researchers at Uber are always looking to experiment with various deep learning frameworks that can help improve their models. Uber ATG leverages deep learning to provide safe and reliable self-driving technology by building and training models that handle tasks such as processing sensor input, identifying objects, and predicting where those objects might go.
Over the last few years, ATG used a number of popular deep learning frameworks, including Caffe2, TensorFlow, and PyTorch, to develop models. In the process, the researchers found that supporting multiple frameworks not only consumed a lot of engineering effort but also led to problems such as memory corruption when running other frameworks alongside TensorFlow, along with various issues that were very difficult to debug.
According to the researchers, adding support for a new deep learning framework across an entire machine learning stack is costly and time-intensive. Vivek Panyam, a Senior Autonomy Engineer at Uber ATG, stated that even if it is simple for a researcher to experiment with new frameworks, adding production support for a new deep learning framework throughout all the systems and processes is a substantial task. Part of what makes this task so difficult is that it requires integration as well as optimisation work for each piece of the infrastructure and tooling.
Also, to make productionisation easier, the researchers wanted to be able to easily swap out models that solve the same problem, even if they were implemented in different frameworks. However, issues like memory corruption and dependency conflicts caused teams to spend significant effort working through integration problems instead of focusing on model development.
This is where Neuropod comes into play. The library works as an abstraction layer on top of existing deep learning frameworks, providing a uniform interface to run DL models. Neuropod is a way to maximise flexibility during research without having to redo work during other parts of the process.
According to the developers at Uber, this library makes it easy for researchers to build models in a framework of their choosing while also simplifying the productionisation of these models. Vivek stated that Neuropod starts with the concept of a problem definition, which is a formal description of a “problem” for models to solve.
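A problem definition can be pictured as a formal spec of the named input and output tensors a model must accept and produce. The sketch below is illustrative only: the field layout follows the input_spec/output_spec style shown in Neuropod's documentation, but it is plain Python data rather than the library itself, and the 2D object detection problem and its tensor names are invented for the example.

```python
# Hypothetical problem definition: any model that "implements" this problem
# must consume and produce exactly these named tensors.
OBJECT_DETECTION_PROBLEM = {
    "input_spec": [
        # Symbolic dimensions like "height" mean "any size, but consistent
        # wherever the same symbol appears".
        {"name": "image", "dtype": "uint8", "shape": ("height", "width", 3)},
    ],
    "output_spec": [
        {"name": "boxes", "dtype": "float32", "shape": ("num_detections", 4)},
        {"name": "object_class", "dtype": "int32", "shape": ("num_detections",)},
    ],
}

def check_tensors(tensors, spec):
    """Validate a dict of tensor descriptors (name -> {"dtype", "shape"})
    against a spec: names must match exactly, dtypes must agree, and the
    number of dimensions must agree."""
    expected = {entry["name"]: entry for entry in spec}
    if set(tensors) != set(expected):
        raise ValueError(f"expected {sorted(expected)}, got {sorted(tensors)}")
    for name, tensor in tensors.items():
        want = expected[name]
        if tensor["dtype"] != want["dtype"]:
            raise TypeError(f"{name}: dtype {tensor['dtype']} != {want['dtype']}")
        if len(tensor["shape"]) != len(want["shape"]):
            raise ValueError(f"{name}: rank mismatch vs spec")
```

Because every model implementing the spec exposes the same tensors, callers can validate inputs once and treat the models behind the spec as interchangeable.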
Neuropod uses Bazel as a build system. Currently, the library supports Python and C++ and several frameworks including TensorFlow, PyTorch, Keras, and TorchScript.
Every Neuropod model implements a problem definition. As a result, any models that solve the same problem are interchangeable, even if they use different frameworks. The library works by wrapping existing models in a neuropod package (or “a neuropod” for short), which contains the original model along with metadata, test data, and any custom ops.
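The interchangeability idea can be sketched in a few lines of plain Python. This is not the Neuropod API; the wrapper class and the two stand-in "framework" functions below are invented to show how a uniform dict-in, dict-out interface lets models that solve the same problem be swapped without changing the call site.

```python
class UniformModel:
    """Framework-agnostic wrapper: dict of named inputs -> dict of named outputs."""
    def __init__(self, backend_fn, name):
        self._backend_fn = backend_fn  # the underlying framework's forward pass
        self.name = name

    def infer(self, inputs):
        return self._backend_fn(inputs)

# Stand-ins for the same model trained in two different frameworks.
def framework_a_addition(inputs):
    return {"out": inputs["x"] + inputs["y"]}

def framework_b_addition(inputs):
    return {"out": sum((inputs["x"], inputs["y"]))}

model_a = UniformModel(framework_a_addition, "addition_model_a")
model_b = UniformModel(framework_b_addition, "addition_model_b")

# The same call site works for either model: they are interchangeable.
for model in (model_a, model_b):
    assert model.infer({"x": 2, "y": 3}) == {"out": 5}
```

The key design point is that inference code depends only on the named-tensor interface, never on which framework produced the model.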
Benefits of Neuropod
With the Neuropod library, any model can be executed from any supported language. For instance, if a user wants to run a PyTorch model using C++, Neuropod will spin up a Python interpreter under the hood and communicate with it to run the model.
Some of the benefits of using this library are listed below:
- With Neuropod, a user can run models from any supported framework using one API
- You can easily switch between deep learning frameworks if necessary without changing runtime code
- In this library, all of your inference code is framework agnostic
- Any Neuropod model can be run from both C++ and Python
- This library helps you to focus more on the problem you’re solving rather than the framework you’re using to solve it
- You can easily swap out models that solve the same problem at runtime with no code change
- Run experiments faster
- Using this library, you can switch from running in-process to running out-of-process with just a single line of code.
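The in-process versus out-of-process switch can be illustrated with a standard-library stand-in. This sketch is not the Neuropod API: the real library proxies tensors to a worker process over IPC, while here a hypothetical `Runner` class uses a single-worker multiprocessing pool. The point it demonstrates is that only the constructor flag changes; every `infer()` call site stays identical.

```python
import os
from multiprocessing import get_context

def model_fn(inputs):
    """Stand-in for a packaged model's forward pass; also reports the pid
    it ran in, so we can see which process did the work."""
    return {"doubled": [2 * v for v in inputs["values"]], "pid": os.getpid()}

class Runner:
    def __init__(self, fn, out_of_process=False):  # the "one line" toggle
        self._fn = fn
        # "fork" keeps this sketch runnable as a plain script on Unix.
        self._pool = get_context("fork").Pool(1) if out_of_process else None

    def infer(self, inputs):
        if self._pool is not None:
            # Ship the inputs to the worker process and wait for the result.
            return self._pool.apply(self._fn, (inputs,))
        return self._fn(inputs)
```

Running `Runner(model_fn).infer(...)` executes in the caller's process, while `Runner(model_fn, out_of_process=True).infer(...)` returns the same result from a different pid, which mirrors how an out-of-process backend isolates a model's dependencies from the caller.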
According to the developers at Uber, Neuropod has been very useful across a variety of deep learning teams at Uber, and over the last year, the ride-hailing company has deployed a number of Neuropod models across Uber ATG, Uber AI, and the core Uber business. These include models for demand forecasting, estimated time of arrival (ETA) prediction for rides, menu transcription for Uber Eats, and object detection for self-driving vehicles.