Why Push-Button Productisation of AI Models Is a Far-Fetched Idea

Data scientists can deploy a new model in less than half an hour, compared to days or weeks without MLOps.

Last month, Intel IT announced that it achieved ‘push-button productisation of AI models,’ which helped it deploy AI faster and at scale. But how feasible is this idea, and what does it mean for the machine learning models being deployed?

Businesses globally have benefited from ML’s automation, including product recommendations, fraud detection, determining creditworthiness, targeted emails, etc. However, the reality is sobering – many ML projects fail before they ever see the light of day.

According to Gartner, only 53 per cent of projects make it from prototype to production – and that is at organisations with some level of AI experience. For companies still working to develop a data-driven culture, the failure rate is likely higher, with some estimates soaring to nearly 90 per cent. A McKinsey survey of 160 companies found that 88 per cent did not progress beyond the experimental stage.

The Struggle is Real 

Productising AI models requires substantial time and effort. In many cases, a model never makes it to production. Intel believes that creating custom ML applications is a primary hurdle to the productisation of ML models.

It is often tempting to build a dedicated wrapper application for each model using modern platforms, tools and open-source code. With this approach, each AI model is built as a small, isolated application that takes care of data extraction and preparation. Also, it exposes the model’s results for consumption through a web service. 

Developing a wrapper application can take anywhere from a few days to a few weeks, plus an additional week of testing, so productising a single model takes weeks. The wrapper approach, therefore, does not scale: it easily leads to a chaotic sprawl of hundreds of unmanaged AI applications.
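The wrapper pattern described above can be sketched as a tiny, self-contained service per model. Everything here is illustrative – the model name, the feature field and the JSON contract are hypothetical stand-ins, assuming a pickled scikit-learn-style model object in a real deployment:

```python
import json

class ChurnModel:
    """Stand-in for a trained model artifact (hypothetical toy rule).
    A real wrapper would load a serialised model from storage instead."""
    def predict(self, features):
        # Flag customers with low weekly activity as churn risks.
        return [1 if f["logins_per_week"] < 2 else 0 for f in features]

def extract(raw_records):
    """Data extraction and preparation: keep only the fields the model needs."""
    return [{"logins_per_week": r["logins_per_week"]} for r in raw_records]

def handle_request(body, model=ChurnModel()):
    """Web-service entry point: JSON records in, predictions out."""
    features = extract(json.loads(body))
    return json.dumps({"predictions": model.predict(features)})
```

The problem the article points at is visible even in this toy: every model needs its own copy of the extraction, serving and deployment boilerplate, so a hundred models means a hundred small applications to maintain.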

Thanks to the push-button productisation technique, Intel IT was able to achieve AI at scale. But it still seems like a far-fetched reality for many companies, as they remain stuck building custom ML applications.

Inside Intel IT 

Today, Intel IT’s AI team works across Intel to transform critical work, optimise processes, eliminate scalability bottlenecks and generate significant business value across business processes, including sales, marketing, product design, manufacturing, performance, and pricing. The AI group consists of over 200 data scientists, machine learning (ML) engineers, and AI product experts. 

Intel IT claims to have deployed over 500 machine learning models to production in the last ten years – more than 100 of them in the last year alone.

Here’s how

To do so, Intel IT developed Microraptor, a set of machine learning operations (MLOps) capabilities. For those unaware, MLOps is the practice of efficiently developing, testing and maintaining machine learning models in production. It automates and monitors the entire machine learning lifecycle and enables collaboration across teams, resulting in faster time to production and reproducible results.

To enable MLOps, Intel IT built an AI productisation platform for each business domain it works with, including sales, manufacturing, operations, etc. The team said its MLOps capabilities are reused across all of these AI platforms, through which all of its models and AI services are delivered, deployed, managed and maintained.

“Our approach to model productisation avoids the typical logistical hurdles that often prevent other companies’ AI projects from reaching production,” said the Intel IT team, stating that it enabled them to deploy AI models to production at scale through CI/CD, automation, reuse of building blocks and business process integration.  

Here are some of the advantages of Intel IT’s MLOps methodology 

  • The AI platform’s business process integration and abstracted deployment details let data scientists concentrate on model development.
  • Data scientists can deploy a new model in less than half an hour, compared to days or weeks without MLOps.
  • Systematic quality metrics minimise the cost and effort required to maintain the hundreds of models in production.

What is Push-Button Productisation? 

Push-button productisation is a method of building AI platforms for deploying ML models to production at scale. It includes a fully automated continuous delivery process and systematic measures to minimise the cost and effort required to build and sustain the models in production.

This technique offers a clean separation of concerns, as data scientists do not have to worry about engineering. In other words, a data scientist pushes the model – which is just code – that complies with a set of standards; the platform then builds, tests, deploys and activates it, with all the integration hooks into the business domain.
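The separation of concerns rests on a contract: if the pushed code meets the platform’s standards, the delivery pipeline can run without human intervention. A minimal sketch of that idea, assuming a hypothetical contract and stage names (the source does not describe Intel’s actual standard):

```python
# Hypothetical platform contract: every model module must expose these hooks.
REQUIRED_HOOKS = ("train", "predict")

def productise(model_module):
    """Validate the contract, then run the automated delivery stages.
    The stage loop is a placeholder for real CI/CD steps."""
    missing = [h for h in REQUIRED_HOOKS if not hasattr(model_module, h)]
    if missing:
        raise ValueError(f"model does not meet the standard, missing: {missing}")
    completed = []
    for stage in ("build", "test", "deploy", "activate"):
        completed.append(stage)  # real platform: run containers, tests, rollout
    return completed

class PriceModel:
    """A hypothetical conforming model: it satisfies the contract above."""
    def train(self, data):
        pass
    def predict(self, x):
        return [0 for _ in x]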

Introducing Microraptor 

Intel’s Microraptor leverages many open-source platforms to enable the full MLOps lifecycle while abstracting the complexity of these platforms from the data scientists. This means data scientists do not need to know anything about Kubernetes or Elasticsearch and can focus instead on developing the best ML model.

Once the model is complete, a data scientist can register it to MLflow, an open-source platform for managing the ML lifecycle, while complying with basic coding standards. Everything – from building to testing to deploying – then happens automatically. The model is initially deployed as a release candidate, which can be activated with another button push into the relevant business domain’s AI platform.
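The two-step flow – register a release candidate, then activate it with a second button push – can be illustrated with a toy registry. This is loosely modelled on MLflow-style stage transitions but simulated here in plain Python; the model name and stage labels are hypothetical, not MLflow’s actual API:

```python
class ModelRegistry:
    """Toy registry: register() deploys a release candidate; a second
    'button push' via activate() promotes it to production."""
    def __init__(self):
        self.models = {}  # (name, version) -> stage

    def register(self, name, version):
        self.models[(name, version)] = "release-candidate"

    def activate(self, name, version):
        if self.models.get((name, version)) != "release-candidate":
            raise ValueError("only a registered release candidate can be activated")
        self.models[(name, version)] = "production"
```

The intermediate release-candidate stage is what makes the second button meaningful: the model is already built, tested and deployed, but carries no business traffic until someone explicitly promotes it.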

Wrapping up 

Deploying machine learning at scale is one of the most challenging and rewarding experiences for data scientists. With the right mix of strategy, process and technology, machine learning projects can deliver competitive advantages and fuel growth across industries. 

The idea of push-button productisation may help teams deploy machine learning models faster. Still, some would argue that it is equally important to know the nitty-gritty of the platforms used to deploy those models.

Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.