Built by developers from Google, IBM, Cisco, and others, Kubeflow is an open-source machine learning toolkit for Kubernetes. It builds on Kubernetes as a system for deploying, scaling, and managing complex workloads. Kubeflow started as a simpler way to run TensorFlow jobs on Kubernetes, based on a pipeline known as TensorFlow Extended, and has since expanded into a multi-architecture, multi-cloud framework for running entire machine learning pipelines.
According to its developers, the platform was created because building and deploying real-world machine learning applications is hard and costly, owing to the lack of tooling that covers the end-to-end machine learning development and deployment process. Delivering a high-performance machine learning model as an end-to-end product involves a number of critical components. Kubeflow is a platform for building and deploying such machine learning models without much hassle.
The basic workflow of Kubeflow is as follows:
- Download and run the Kubeflow deployment binary.
- Customise the resulting configuration files.
- Run the specified script to deploy containers to a specific environment.
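As a rough sketch of these three steps, a Kubeflow 1.0-era deployment with the kfctl binary looks like the following; the archive name and configuration URI shown here are illustrative placeholders, so check the Kubeflow release page for the exact versions to use:

```shell
# Step 1: download and unpack the kfctl deployment binary
# (archive name is an illustrative placeholder)
tar -xvf kfctl_v1.0_linux.tar.gz
export PATH=$PATH:$(pwd)

# Point at a configuration manifest for the target environment
# (placeholder URI - substitute the manifest for your platform)
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/master/kfdef/kfctl_k8s_istio.yaml"

# Step 2: build first if you want to customise the generated
# configuration files before deploying
kfctl build -V -f ${CONFIG_URI}

# Step 3: run the deployment, which applies the containers
# to the target cluster
kfctl apply -V -f ${CONFIG_URI}
```

The `build` step is optional; `kfctl apply` alone will generate and deploy the default configuration in one shot.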
Kubeflow helps in scaling machine learning models and deploying them to production as needed. It offers several components that one can use to build machine learning training, hyperparameter tuning, and serving workloads across multiple platforms. Some of its key features are mentioned below:
- Kubeflow provides scaling based on demand
- It helps in deploying and managing loosely-coupled microservices
- It provides easy, repeatable, and portable deployments on diverse infrastructure
Here are some of the critical use cases of Kubeflow:
- Deploying and managing a complex ML system at scale: With Kubeflow, one can manage an entire AI organisation at scale. Kubeflow’s core and ecosystem critical user journeys (CUJs) provide software solutions for end-to-end workflows, which means one can easily build, train, and deploy a model, as well as create, run, and explore a pipeline.
- Experimentation with training an ML model: Kubeflow 1.0 provides stable software sub-systems for model training, including Jupyter notebooks and popular machine learning training operators such as the TensorFlow and PyTorch operators, which run efficiently and securely in isolated Kubernetes namespaces.
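To illustrate, a training job submitted through the TensorFlow operator is declared as a Kubernetes custom resource. The sketch below assumes the `kubeflow.org/v1` TFJob API; the job name, namespace, image, and replica count are illustrative placeholders:

```yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train            # hypothetical job name
  namespace: user-namespace    # runs in the user's isolated namespace
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2              # two training workers (illustrative)
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: example/mnist-train:latest  # placeholder training image
```

The operator watches for resources of this kind and creates the underlying pods, so the user never manages the workers by hand.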
- End-to-end hybrid and multi-cloud ML workloads: Kubeflow is supported by all major cloud providers and is available for on-premises installation, which fulfils the requirement of developing machine learning models with hybrid and multi-cloud portability.
- Tuning the model hyperparameters during training: Hyperparameter tuning is critical for model performance and accuracy. With the help of Kubeflow’s hyperparameter tuner, Katib, model hyperparameter tuning can be done in an automated way. This automation not only reduces computation time but also speeds up the delivery of improved models.
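Katib experiments are likewise declared as Kubernetes resources. The fragment below is a minimal sketch assuming Katib's `v1beta1` Experiment API, with a random search over a single learning-rate parameter; the experiment name, metric name, and parameter range are illustrative, and the `trialTemplate` that points at the actual training container is omitted for brevity:

```yaml
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: random-search-example     # hypothetical experiment name
spec:
  objective:
    type: maximize
    objectiveMetricName: accuracy # metric reported by the training code
  algorithm:
    algorithmName: random         # random search over the feasible space
  maxTrialCount: 12               # run at most 12 trials in total
  parallelTrialCount: 3           # up to 3 trials at a time
  parameters:
    - name: lr
      parameterType: double
      feasibleSpace:
        min: "0.001"
        max: "0.1"
```

Katib launches trials with different sampled values of `lr`, collects the reported metric, and surfaces the best-performing configuration.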
- Continuous integration and deployment (CI/CD) for ML: Although Kubeflow does not yet have a dedicated tool for CI/CD, Kubeflow Pipelines can be used to create reproducible workflows. These workflows automate the steps needed to build an ML workflow, which delivers consistency, saves iteration time, and helps with debugging, compliance requirements, and more.
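In practice such workflows are authored with the Kubeflow Pipelines SDK, but the underlying idea, the same steps running in the same order on every run, can be sketched in plain Python. The step names and data below are purely illustrative and do not use the kfp SDK; in a real pipeline each step would run as its own container:

```python
# A minimal plain-Python sketch of a reproducible three-step workflow
# (preprocess -> train -> evaluate). All names and data are illustrative;
# in Kubeflow Pipelines each function would be a containerised step.

def preprocess(raw):
    """Normalise raw values into the [0, 1] range."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def train(data):
    """'Train' a trivial model: here, just the mean of the data."""
    return sum(data) / len(data)

def evaluate(model, data):
    """Score the model as mean absolute error against the data."""
    return sum(abs(x - model) for x in data) / len(data)

def run_pipeline(raw):
    """Run the same steps in the same order every time - that
    repeatability is what makes the workflow reproducible."""
    data = preprocess(raw)
    model = train(data)
    return evaluate(model, data)

score = run_pipeline([2.0, 4.0, 6.0, 8.0])
print(score)
```

Because every run executes identical steps on identical inputs, reruns for debugging or compliance audits produce the same result each time.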
Who Uses It
This platform is meant for the data scientists who want to build and experiment with ML pipelines. Kubeflow is also for ML engineers and operational teams who want to deploy ML systems to various environments for development, testing, and production-level serving.
Kubeflow makes a major release at the end of every quarter, while minor releases occur as needed to fix important bugs. Last year, Kubeflow version 0.6 was released, introducing several intuitive features and enhancements, such as artefact tracking and data versioning in Istio-based multi-user environments.