
Plug-and-play ML models across platforms using MLCube

MLCube is an interface that facilitates easy model sharing across platforms, helping increase model usage across platforms.

A variety of machine learning and AI models face issues of portability across platforms. On standard platforms like the cloud, Kubernetes, and GCP, and many times even on localhost, the models developed cannot be deployed and used due to dependency issues. This is where MLCube is useful: it packages models as simple downloadable, executable bundles that can be shared across platforms. In this article, we will focus on MLCube and understand its importance in this context.

Table of Contents

  1. Introduction to MLCube
  2. MLCube on different platforms
  3. Case Study of MLCube in Docker
  4. Summary

Introduction to MLCube

Machine learning models are developed for various tasks, but sharing a model is sometimes difficult as different platforms have their own dependency issues. MLCube is a packaging library that enables machine learning researchers and engineers to use models developed anywhere in the world on their own platforms.


MLCube basically operates on the principle of “one model, run anywhere”. Using MLCube, we can run trained models on various platforms: local machines, cloud-based platforms like GCP, or even Kubernetes clusters if required. MLCube supports these platforms in the form of runners, which include Docker, Kubeflow, SSH, and more; each runner can be installed with a simple pip statement for the respective format. On the whole, MLCube acts like a shipping container for ML models that can be used across diverse platforms to share and utilize trained models.
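A cube is described by a small configuration file (conventionally mlcube.yaml) that names the container image and the tasks the model exposes; a runner reads this file and maps each task to an invocation on the chosen platform. The sketch below is illustrative, loosely modelled on the MNIST example in the MLCube repository; the image tag and parameter names are assumptions:

```yaml
# mlcube.yaml -- illustrative sketch of a cube definition
name: mnist
description: Example cube that downloads MNIST and trains a classifier

docker:
  # Image that packages the model code and its dependencies (hypothetical tag)
  image: mlcommons/mnist:0.0.1

tasks:
  download:
    parameters:
      outputs: {data_dir: data/}    # where the dataset is written
  train:
    parameters:
      inputs: {data_dir: data/}     # dataset produced by the download task
      outputs: {model_dir: model/}  # trained model artifacts
```

Because the dependencies live inside the container image, anyone who receives the cube can run the same tasks without recreating the author's environment.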

MLCube has the flexibility to share trained ML models across various platforms, and this sharing is handled through “runners”. In the next section of the article, let us try to understand the different MLCube runners.

MLCube on different platforms

The MLCube framework is still in the development phase and currently supports six types of runners. A runner can be chosen according to the user’s working environment, that is, the platform on which the machine learning engineer or researcher wants to train the model or share it. Now let us look into the different types of MLCube runners available and the characteristics of each.

MLCube in Docker

The MLCube package for Docker can be used by first installing the MLCube package that is designed for the Docker environment. That can be done through a pip command as shown below.

pip install mlcube-docker

So once the MLCube package is installed, MLCube can be run in the Docker environment using its standard commands, the main ones being configure and run. A trained model can then be made available in the Docker environment and used accordingly.
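Concretely, a typical Docker session first builds or pulls the cube’s image and then executes one of its tasks. This is a sketch assuming the current directory contains a cube definition and that the cube exposes a train task:

```shell
# Build or pull the cube's Docker image (one-time setup)
mlcube configure --mlcube . --platform docker

# Execute one of the cube's tasks inside the container
mlcube run --mlcube . --task train --platform docker
```

The same run pattern is used in the MNIST case study later in this article.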

MLCube in GCP

The MLCube package in GCP can be used by first installing the MLCube package that is designed for the GCP platform. That can be done through a pip command as shown below.

pip install mlcube-gcp

So once the MLCube package is installed, MLCube can be run on the GCP platform. On GCP, the configuration file parameters and standard resources such as Compute Engine VM instances have to be activated accordingly, after which MLCube can be used to run the shared model or to retrain it on the platform.
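The GCP runner is driven by a platform configuration that points at a Compute Engine instance. The exact schema may change while MLCube is in development; all field names and values in the sketch below are assumptions, shown only to illustrate the kind of information the runner needs:

```yaml
# Illustrative GCP platform configuration; field names and values are assumptions
gcp:
  project_id: my-gcp-project      # GCP project that hosts the VM
  zone: us-central1-a             # Compute Engine zone
  instance:
    name: mlcube-runner           # VM instance to create or reuse
    machine_type: n1-standard-4   # VM size for training the shared model
    disk_size_gb: 100
```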

MLCube in Kubernetes

The MLCube package for Kubernetes can be used by first installing the MLCube package that is designed for Kubernetes. That can be done through a pip command as shown below.

pip install mlcube-k8s

So once the MLCube package is installed, the Kubernetes platform can be used to accelerate the training of the shared model. The standard run command activates a Job manifest; the Kubernetes runner then creates the resources needed to train and use the shared model on the cluster. The job is instantiated according to MLCube’s requirements, and once the model’s training process is completed, the job completes. This is how MLCube is used in Kubernetes.
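The Job manifest that the runner activates looks roughly like a standard Kubernetes batch Job wrapping the cube’s container. The sketch below is illustrative only; the job name, image, and arguments are hypothetical:

```yaml
# Illustrative sketch of the kind of Job the Kubernetes runner creates
apiVersion: batch/v1
kind: Job
metadata:
  name: mlcube-mnist-train           # hypothetical job name
spec:
  backoffLimit: 0                    # fail fast rather than retrying the task
  template:
    spec:
      restartPolicy: Never           # a task runs exactly once per job
      containers:
        - name: mlcube
          image: mlcommons/mnist:0.0.1     # the cube's container image (hypothetical)
          args: ["--task", "train"]        # task forwarded to the cube entrypoint
```

When the container exits successfully, the Job is marked complete, which matches the lifecycle described above.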

MLCube in Kubeflow

The MLCube package for the Kubeflow platform can be used by first installing the MLCube package that is designed for the Kubeflow platform. That can be done through a pip command as shown below.

pip install mlcube-kubeflow

MLCube support for Kubeflow is still in development and needs some improvement. The basic commands supported are run and configure, along with standard arguments such as platform and task. PVC redirection to the MLCube workspace and Kubeflow pipelining have yet to be completed to make full use of the MLCube library in Kubeflow.

MLCube in Singularity

The MLCube package for the Singularity platform can be used by first installing the MLCube package that is designed for the Singularity platform. That can be done through a pip command as shown below.

pip install mlcube-singularity

The mandatory and standard commands supported by Singularity for MLCubes are similar to those of Kubeflow. In the current development stage, the runner essentially invokes singularity run with the configured {volumes} and {task args}. These can be used in the Singularity working environment to instantiate the MLCube and train or use the shared model.

MLCube in Secure Shell (ssh)

The MLCube package for Secure Shell (SSH) can be used by first installing the MLCube package that is designed for SSH. That can be done through a pip command as shown below.

pip install mlcube-ssh

Some of the basic commands used by the SSH runner are ssh and rsync. The shared models are made available in the remote working environment, and they can be used or trained by overriding certain ssh command-line arguments as needed. The rsync command is used to synchronize the MLCube workspace between the local and remote machines during the training and running of the model.
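In practice, the SSH runner’s behaviour amounts to copying the cube to the remote machine, running it there, and syncing results back. A manual sketch of that flow, with a hypothetical host and paths, looks like this:

```shell
# Synchronize the local cube directory to the remote machine (host and paths are hypothetical)
rsync -avz ./mnist/ user@remote-host:~/mlcubes/mnist/

# Run a task remotely, here delegating to the Docker runner on the remote side
ssh user@remote-host 'cd ~/mlcubes/mnist && mlcube run --mlcube . --task train --platform docker'

# Pull the results (e.g. the trained model) back into the local workspace
rsync -avz user@remote-host:~/mlcubes/mnist/workspace/ ./mnist/workspace/
```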

Case Study of MLCube in Docker

In the development phase of MLCube, there are four use cases mentioned in the official GitHub repository: model training and sharing on the MNIST dataset, a simple hello-world program, an example using the Electron Microscopy (EMDenoise) dataset, and a simple matrix multiplication program known as matmul.

In this article, let us try to understand how MLCube is used to train a model on the MNIST data on the Docker platform. Assume that you are working in a terminal with Docker available.

Step-1: Create a python environment in the Docker platform

Let us create a Python virtual environment and activate it using the below lines of code.

# Create a python virtual environment
virtualenv -p python3 ./env && source ./env/bin/activate

Step-2: Installing the MLCube docker package

Let us install the MLCube Docker package by using the pip command as shown below.

pip install mlcube mlcube-docker

Step-3: Check for docker runners

Once the MLCube Docker library is installed, we have to check that the appropriate runners for Docker are installed by using the below code.

mlcube config --get runners

Step-4: Check for platform configuration

Once the MLCube Docker library is installed, we have to check the platform configurations with respect to MLCube requirements by using the below code.

mlcube config --get platforms

Step-5: Cloning the MLCube examples GitHub repository

As the MLCube library is still in the development stage, the examples live in a separate GitHub repository. Let us clone the MLCube examples repository and change into the MNIST directory. We can use the below code to do the same.

git clone 'https://github.com/mlcommons/mlcube_examples.git' && cd './mlcube_examples/mnist'

Step-6: Describing the MLCube configuration

We can describe the MLCube to confirm that the repository was cloned successfully and that all prerequisites are installed.

mlcube describe --mlcube .

Step-7: Resolving MLCube configuration for Docker

We have to validate and resolve MLCube library configurations for the Docker platform using the below code.

mlcube show_config --resolve --mlcube . --platform docker

Step-8: Downloading MNIST data from MLCube

The MNIST data has to be downloaded into the Docker platform using the below code.

mlcube run --mlcube . --task download --platform docker

Step-9: Training the model in the Docker platform

Now let us train the model for MNIST data in the docker platform using the below code.

mlcube run --mlcube . --task train --platform docker

So this is how MLCube is used on the Docker platform to download the MNIST data and train the shared model.

Summary

MLCube is a framework that increases the availability of models across platforms. It is still in the development stage, and it currently functions across six platforms with shared models. Through MLCube, a single model can be shared anywhere in the world across different platforms, from localhost and cloud-based platforms to Kubernetes clusters and Docker. As a simple plug-and-play library for sharing and using models, MLCube helps keep more models active and in action for their desired tasks across various platforms.


Darshan M

Darshan holds a Master’s degree in Data Science and Machine Learning and is an everyday learner of the latest trends in the field. He is keenly interested in learning new things, implementing them, and curating rich content for Data Science, Machine Learning, NLP, and AI.