Top MLOps Tool Repos on GitHub

MLOps was introduced to provide an end-to-end machine learning development process for designing, building and managing reproducible, testable and evolvable ML-powered software. MLOps allows organisations to collaborate across departments and accelerate workflows that would otherwise stall due to issues in production. In the next section, we present the top MLOps tool repos available on GitHub.

(Image credits: Microsoft Azure)

Here are the top GitHub MLOps tool repos:

Seldon Core

Seldon Core is an MLOps framework to package, deploy, monitor and manage thousands of production machine learning models. It converts machine learning models (built with TensorFlow, PyTorch, H2O, etc.) or language wrappers (written in Python, Java, etc.) into production microservices.

Seldon Core makes scaling to thousands of production machine learning models possible and provides advanced ML capabilities, including advanced metrics, request logging, explainers, outlier detectors, A/B tests, canaries and more. It makes deployment easy through its pre-packaged inference servers and language wrappers.
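Seldon's Python language wrapper expects little more than a class exposing a predict method, which its build tooling then packages into a microservice image. A minimal sketch of such a class (the model logic and names here are illustrative, not taken from the Seldon docs):

```python
# Illustrative stand-in for a trained model; Seldon's Python wrapper
# only requires a class with a predict(X, features_names) method.
class MeanModel:
    def predict(self, X, features_names=None):
        # X arrives as a 2-D array of feature rows; return one score per row.
        return [sum(row) / len(row) for row in X]

# Local smoke test of the interface before packaging.
preds = MeanModel().predict([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

Once packaged, the same class answers the REST and gRPC prediction calls that Seldon exposes.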

Check the full repo here.


Polyaxon

Polyaxon can be used for building, training and monitoring large-scale deep learning applications. The platform is built to address reproducibility, automation and scalability for ML applications. Polyaxon can be deployed in any data centre or cloud provider, or can be hosted. It supports all the major deep learning frameworks, such as TensorFlow, MXNet, Caffe and Torch.

According to the team that developed Polyaxon, the platform makes developing ML applications faster, easier and more efficient by managing workloads with smart container and node management. It can even turn GPU servers into shared, self-service resources for teams.

Installation: $ pip install -U polyaxon

Check the full repo here.

Hydrosphere Serving

Hydrosphere Serving offers deployment and versioning options for machine learning models in production. This MLOps platform:

  • Serves machine learning models developed in any language or framework, wrapping them in a Docker image, deploying them onto the production cluster and exposing HTTP, gRPC and Kafka interfaces.
  • Shadows traffic between different model versions to examine how they behave on the same traffic.
  • Version-controls models and pipelines as they are deployed.

Check the full repo here.


Metaflow

Metaflow was originally developed at Netflix to address the needs of its data scientists who work on demanding real-life data science projects. Netflix open-sourced Metaflow in 2019.

Metaflow helps users design workflows, run them at scale and deploy them to production. It automatically versions and tracks all experiments and data, and it provides built-in integrations with storage, compute and machine learning services in the AWS cloud, with no code changes required.

Check the full repo here.


Kedro

Kedro is an open-source Python framework which can be used for creating reproducible, maintainable and modular data science code. Kedro is built on the foundations of software engineering and applies them to machine learning code; applied concepts include modularity, separation of concerns and versioning.

Check the full repo here.


BentoML

As a flexible, high-performance framework, BentoML can be used for serving, managing and deploying machine learning models. It does this by providing a standard interface for describing a prediction service and abstracting away how to run model inference efficiently and how model serving workloads integrate with cloud infrastructure.

BentoML’s features include:

  • Production-ready online serving.
  • Support for multiple ML frameworks, including PyTorch and TensorFlow.
  • Containerized model server for production deployment with Docker, Kubernetes, etc.
  • Automatic discovery and packaging of all dependencies.
  • Serving of any Python code along with trained models.
  • Health-check endpoint and Prometheus /metrics endpoint for monitoring.

Check the full repo here.


Flyte

Flyte provides a production-grade, container-native, type-safe workflow platform optimised for large-scale processing. Written in Go, it enables highly concurrent, scalable and maintainable workflows for machine learning and data processing. It connects disparate computation backends using a type-safe data dependency graph and records all changes to a pipeline, making it possible to rewind time.

Check the full repo here.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
