Step By Step Guide To Building ML Model Registry

A model registry in MLOps offers a collaborative hub for the various stages of the ML lifecycle, including managing multiple model artefacts and tracking and governing models. It is also a place to find, publish, and use ML models. As a centralized tracking system for trained ML models, the registry stores model lineage, versioning, and additional configuration information.

Model registry key features:

  • Central Repository: Allows the user to manage all their models in one place and share the information with team members.
  • Model Versioning: Automatically keeps track of versions, including the data used, hyperparameters, and the model the user created.
  • Notifications: The registry notifies the user of every event in the ML lifecycle.


User-friendly model registry platforms 

Microsoft’s Azure ML and MLflow are user-friendly tools for building a model registry. Here’s a step-by-step guide on how to create a model registry on these platforms.

I. Azure Machine Learning

Microsoft’s Azure ML is a cloud-based platform for training, deploying, automating, managing, and monitoring ML experiments. Its reusable pipelines allow for easy data preparation, model training, and deployment of ML models. It also stores the metadata associated with the model, including the model description, tags, framework, where it is deployed, and whether the deployments are healthy. For instance, tags allow the user to categorise the models and apply filters while listing them.

The Azure ML Workflow: 

  1. Register the model

For starters, two components are required: resources representing the model being deployed, and code running in the service that executes the model on a given input.

The service works by uploading the registered model to the cloud and mounting it to the running web service. There are two ways to register the model. 

Register the model from a local site:

!wget <model-url> -O model.onnx  # supply the model’s download URL; -O (capital O) names the output file

!az ml model register -n bidaf_onnx -p ./model.onnx

Register the model from ML training run: 

az ml model register -n bidaf_onnx --asset-path outputs/model.onnx --experiment-name myexperiment --run-id myrunid --tag area=qna

2. Prepare the entry script

The entry script receives the data submitted to the web service, passes it to the model, and then returns the model’s response to the client. The script is written specifically for the model and should thus understand the data the model expects. It typically defines two functions: init(), which loads the model, and run(), which runs the model on the input data.
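That pattern can be sketched as follows. This is a minimal, self-contained sketch: the stand-in model and the input format are assumptions, and a real Azure ML entry script would load the registered model from the path given by the AZUREML_MODEL_DIR environment variable.

```python
import json

model = None

def init():
    # Called once when the web service starts. A real entry script would
    # load the registered model here (e.g. an ONNX session) from the path
    # in the AZUREML_MODEL_DIR environment variable. We substitute a
    # trivial stand-in so the sketch is runnable.
    global model
    model = lambda text: text.upper()

def run(raw_data):
    # Called once per request: parse the JSON payload, run the model,
    # and return a JSON-serialisable response to the client.
    data = json.loads(raw_data)
    result = model(data["query"])
    return json.dumps({"answer": result})
```

When the service receives a request, the framework invokes run() with the request body, and whatever run() returns is sent back to the client.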

3. Prepare the inference and deployment configurations

The inference configuration points to the entry script, while a deployment configuration specifies the memory and cores to reserve for the web service. For instance, the user may need 2 GB of memory, 2 CPU cores, and the ability to autoscale. A minimal local deployment configuration (deploymentconfig.json) looks like this:

{
    "computeType": "local",
    "port": 32267
}
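The 2 GB of memory and 2 CPU cores mentioned above map to the containerResourceRequirements section of the deployment configuration. A sketch that writes such a file from Python (the keys follow Azure ML's deployment-config schema; "aci" targets Azure Container Instances rather than local Docker):

```python
import json

# Deployment configuration reserving 2 CPU cores and 2 GB of memory.
deploy_config = {
    "computeType": "aci",
    "containerResourceRequirements": {"cpu": 2, "memoryInGB": 2},
}

with open("deploymentconfig.json", "w") as f:
    json.dump(deploy_config, f, indent=4)
```

The resulting file can be passed to the deploy command via its --dc flag, just like the local configuration above.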

4. Deploy the model

After this step, the model is ready to be deployed.

!az ml model deploy -n myservice -m bidaf_onnx:1 --overwrite --ic dummyinferenceconfig.json --dc deploymentconfig.json

!az ml service get-logs -n myservice

To check that the model is deployed successfully, the user can send a liveness request, followed by a scoring request:

!curl -v http://localhost:32267

!curl -v -X POST -H "content-type:application/json" -d '{"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}' http://localhost:32267/score
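The same scoring request can also be built from Python with the standard library. In this sketch the endpoint URL and payload mirror the curl command above; the request is only constructed, not sent:

```python
import json
import urllib.request

def build_score_request(url, query, context):
    # Build the same JSON POST that the curl command issues.
    payload = json.dumps({"query": query, "context": context}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_score_request(
    "http://localhost:32267/score",
    "What color is the fox",
    "The quick brown fox jumped over the lazy dog.",
)
# urllib.request.urlopen(req) would send it once the service is running.
```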

5. Update the entry script to load the model

Modify the entry script and then deploy it again. 

Deploy the service again:

!az ml model deploy -n myservice -m bidaf_onnx:1 --overwrite --ic inferenceconfig.json --dc deploymentconfig.json

!az ml service get-logs -n myservice

6. Choose a compute target

Choose a compute target on which to host the model.

7. Re-deploy the model to cloud

Once the service has been tested locally, choose a remote compute target and deploy the model to the cloud.

The model can also be deployed as a web service in Azure Container Instances.

II. MLflow

MLflow is an open-source platform with a centralized model store for managing the MLOps lifecycle. It includes model lineage, model versioning, stage transitions from staging to production, and annotations.

Model Registry Database

SQLite is a C library that provides a disk-based database without a separate server process and allows access to the database using a nonstandard variant of the SQL query language. The user can prototype an application on SQLite and later port the code to a larger database; MLflow itself can use an SQLite file as its backend store for the registry.
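To make the idea concrete, here is a toy sketch of what a registry-style table could look like in SQLite using Python's built-in sqlite3 module. The schema is invented for illustration; MLflow manages its own schema when pointed at an SQLite backend store.

```python
import sqlite3

# In-memory database; swap ":memory:" for a file path to persist it.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE model_versions (
           name    TEXT NOT NULL,
           version INTEGER NOT NULL,
           stage   TEXT DEFAULT 'None',
           PRIMARY KEY (name, version)
       )"""
)

# Register two versions of a model, then promote version 2 to Production.
conn.execute("INSERT INTO model_versions (name, version) VALUES ('bidaf_onnx', 1)")
conn.execute("INSERT INTO model_versions (name, version) VALUES ('bidaf_onnx', 2)")
conn.execute(
    "UPDATE model_versions SET stage = 'Production' WHERE name = ? AND version = ?",
    ("bidaf_onnx", 2),
)

row = conn.execute(
    "SELECT version FROM model_versions WHERE name = 'bidaf_onnx' AND stage = 'Production'"
).fetchone()
```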

The MLflow workflow comes in two flavours: a UI workflow and an API workflow.

UI Workflow

  1. Register the model on the Artifacts page of the MLflow run.
  2. Add a new model in the “Model Name” field: the user can either choose from the existing models or provide a new name.
  3. Once named, the Registered Models section will show the model details.
  4. All versions of a model are shown on its details page.
  5. The model stage can be changed from staging to production from the drop-down menu.

API Workflow

To use the API workflow, the registry operations need to be defined in terms of the ability to:

  1. Publish newly trained models.
  2. Publish a new version of a model.
  3. Update the deployment stage of a published model.
  4. Get metadata associated with a productionized model.
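Those four operations can be sketched as a toy in-memory registry. This is illustrative only; in MLflow itself they map onto MlflowClient methods such as create_registered_model, create_model_version, and transition_model_version_stage.

```python
class ToyModelRegistry:
    """Minimal in-memory stand-in for the four registry operations above."""

    def __init__(self):
        self._models = {}  # name -> list of {"version": int, "stage": str}

    def publish_model(self, name):
        # 1. Publish a newly trained model (no versions yet).
        self._models.setdefault(name, [])

    def publish_version(self, name):
        # 2. Publish a new version of an existing model.
        versions = self._models.setdefault(name, [])
        versions.append({"version": len(versions) + 1, "stage": "None"})
        return versions[-1]["version"]

    def transition_stage(self, name, version, stage):
        # 3. Update the deployment stage (e.g. "Staging" -> "Production").
        self._models[name][version - 1]["stage"] = stage

    def get_production_metadata(self, name):
        # 4. Get metadata for the productionized version(s) of a model.
        return [v for v in self._models[name] if v["stage"] == "Production"]

registry = ToyModelRegistry()
registry.publish_model("bidaf_onnx")
v = registry.publish_version("bidaf_onnx")
registry.transition_stage("bidaf_onnx", v, "Production")
```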

There are three programmatic ways to add a model to the registry.

  1. Using the mlflow.<model_flavor>.log_model() method.

from random import random, randint

from sklearn.ensemble import RandomForestRegressor

import mlflow
import mlflow.sklearn

with mlflow.start_run(run_name="YOUR_RUN_NAME") as run:
    params = {"n_estimators": 5, "random_state": 42}
    sk_learn_rfr = RandomForestRegressor(**params)

    # Log parameters and metrics using the MLflow APIs
    mlflow.log_param("param_1", randint(0, 100))
    mlflow.log_metrics({"metric_1": random(), "metric_2": random() + 1})

    # Log the sklearn model and register it as version 1
    mlflow.sklearn.log_model(
        sk_model=sk_learn_rfr,
        artifact_path="sklearn-model",
        registered_model_name="sk-learn-random-forest-reg-model",
    )

2. The mlflow.register_model() method

This registers a model that was already logged during a run, using its runs:/ URI (replace <run-id> with the actual run ID):

result = mlflow.register_model(
    "runs:/<run-id>/sklearn-model", "sk-learn-random-forest-reg-model"
)

3. The create_registered_model() method

from mlflow.tracking import MlflowClient

client = MlflowClient()
client.create_registered_model("sk-learn-random-forest-reg-model")

Unlike the previous two methods, this creates an empty registered model with no version attached to it.

Copyright Analytics India Magazine Pvt Ltd