“87% of data science projects never make it into production.”
The machine learning community is now gearing up for a new challenge: deployment. But why so much fuss about something so obvious? You build something to deploy, right? Well, not really. Many machine learning models never see the light of day, and those that do make it into production make little noise. This is why production problems were swept under the rug for the better part of the last decade’s AI hype cycle. We read articles about state-of-the-art algorithms and unicorn AI startups, but how well is ML actually being productionised?

Organisations are finally coming to terms with these challenges, and they have found a hero in MLOps: a meticulous marriage of machine learning and software engineering. At the third edition of Rising, organised by Analytics India Magazine, Hamsa Buvaraghan of Google Cloud gave the audience a glimpse of how MLOps powers the machine learning pipelines of the future. Hamsa leads Google Cloud’s Data Science and MLOps Solution team, which builds software solutions for business problems using Google’s Data Analytics and AI/ML products.
In her talk, she demonstrated how MLOps fits into the broader goal of automating the ML pipeline.
- Organisations hardly make it beyond pilots and proofs of concept.
- 72% of organisations that began AI pilots couldn’t deploy even a single application in production.
- According to a recent survey, 55% of companies have not deployed an ML model.
- Models don’t make it into production, and if they do, they break.
- Teams lack reusable and reproducible components, and handoffs between data scientists and IT are difficult.
- Deployment, scaling, and versioning efforts still create headaches.
Who needs MLOps?
MLOps bridges the glaring gap between machine learning development and deployment, much as DevOps and DataOps support application engineering and data engineering. According to Google Cloud, successful deployment and effective operations are the bottleneck for getting value from AI. MLOps is an engineering culture and practice that aims to unify ML system development (Dev) and ML system operations (Ops). Hamsa stressed that the ML code itself is only a small portion of the whole: from configuration to monitoring, and from serving infrastructure to resource management, building production-grade machine learning systems requires much more than just code.
Building an ML-enabled system is a multifaceted undertaking that combines data engineering, ML engineering, and application engineering tasks. “It takes a village to put together an MLOps pipeline,” said Hamsa. Some foundational capabilities are required to support any IT workload, such as a reliable, scalable, and secure compute infrastructure. On top of these, MLOps capabilities include experimentation, data processing, model training, model evaluation, model serving, online experimentation, model monitoring, ML pipelines, and a model registry. An ideal MLOps pipeline takes care of machine learning development, training operationalisation, model deployment, data and model management, and much more. Hamsa believes organisations are now moving towards automated end-to-end pipelines, and that MLOps will find applications across many industries. One of the most significant features of MLOps is its ability to tap into an ML metadata and artifact repository along with a dataset and feature repository. Artifacts can be anything: processed data splits, schemas, statistics, hyperparameters, models, or a model’s evaluation metrics, to name a few.
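The capabilities listed above can be sketched as plain functions chained into a pipeline. This is a toy illustration only, not Google Cloud’s implementation or any real framework; every function name here is hypothetical:

```python
# Toy sketch of the MLOps stages described above: data processing,
# model training, model evaluation, and a model registry, chained
# into one pipeline. Each stage is a plain function so the handoffs
# between steps are explicit.

def process_data(raw):
    """Data processing: drop missing records and split 80/20."""
    cleaned = [r for r in raw if r is not None]
    split = int(0.8 * len(cleaned))
    return cleaned[:split], cleaned[split:]

def train_model(train_set):
    """Model training: here, trivially, the mean of the training data."""
    return {"prediction": sum(train_set) / len(train_set)}

def evaluate_model(model, test_set):
    """Model evaluation: mean absolute error on the test split."""
    p = model["prediction"]
    return sum(abs(x - p) for x in test_set) / len(test_set)

def register_model(registry, model, metric):
    """Model registry: keep every candidate with its metric."""
    registry.append({"model": model, "mae": metric})

def run_pipeline(raw, registry):
    """ML pipeline: chain the stages end to end."""
    train_set, test_set = process_data(raw)
    model = train_model(train_set)
    mae = evaluate_model(model, test_set)
    register_model(registry, model, mae)
    return mae
```

Real pipelines replace each stage with far heavier machinery, but the shape — explicit stages producing artifacts that the next stage consumes — is the point of the sketch.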
“These ML pipelines are a combined effort of data scientists, data engineers, SREs and ML engineers.”
The principle of Changing Anything Changes Everything (CACE) refers to how even a minor change anywhere in a software pipeline propagates through the rest of it. In the context of machine learning, this extends to hyperparameters, learning settings, sampling methods, convergence thresholds, data selection, and essentially every other possible tweak. Various ML artifacts are produced at different stages of the MLOps life cycle, including descriptive statistics and data schemas, trained models, and evaluation results. On top of these sits metadata: the information about those artifacts.
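To make the artifact-and-metadata idea concrete, here is a minimal sketch of a metadata store in Python. The `MetadataStore` class and its methods are hypothetical, invented for illustration, and are not any real product’s API:

```python
import hashlib
import json
import time

# Toy sketch of an ML metadata store: each artifact (a data split,
# schema, trained model, or evaluation result) is logged alongside
# metadata -- a content hash, the pipeline step that produced it, and
# a timestamp -- so that when CACE strikes, a change can be traced
# back through the artifacts it touched.

class MetadataStore:
    def __init__(self):
        self.records = []

    def log_artifact(self, name, payload, produced_by):
        """Record one artifact plus its metadata."""
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        record = {
            "name": name,
            "hash": digest,          # content fingerprint for reproducibility
            "produced_by": produced_by,
            "logged_at": time.time(),
        }
        self.records.append(record)
        return record

    def lineage(self, step):
        """Names of all artifacts produced by a given pipeline step."""
        return [r["name"] for r in self.records if r["produced_by"] == step]
```

For example, logging a training split under the step `"data_processing"` and a model under `"training"` lets `lineage("data_processing")` recover exactly which artifacts that step produced — the kind of traceability a real metadata and artifact repository provides.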
Benefits of MLOps
- MLOps enables shortened development cycles.
- MLOps improves the reliability, performance, scalability, and security of the resulting ML systems.
- MLOps supports management of the entire ML lifecycle.
- It also helps manage risk when organisations scale ML applications to more use cases in changing environments.
Model governance, version control, explainability and the other nitty-gritty of deploying a machine learning model present a nightmarish scenario to an ML practitioner oblivious to software engineering etiquette. MLOps, with its myriad of options and a growing developer community, is the best available answer today to the realities of productionising a model.