# 5 Object Detection Evaluation Metrics That Data Scientists Should Know

In computer vision, object detection is one of the most powerful techniques: it both classifies objects and localizes them. Object detection is more challenging than plain classification because the model must also draw a bounding box around each object in the image. While going through research papers you may come across the terms AP, IoU, and mAP; these are object detection metrics that help in comparing and selecting good models. In this article, we will build a clear understanding of each of these metrics.

• Introduction to precision and recall
• Intersection over Union (IOU)
• Average Precision (AP)
• Mean Average Precision (mAP)
• Variations among mAP

### Introduction to Precision and Recall

Precision – the fraction of the model's positive predictions that are actually correct: Precision = TP / (TP + FP).

Recall – the fraction of the actual positive instances that the model successfully detects: Recall = TP / (TP + FN).
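The two definitions above can be illustrated with a small sketch, using hypothetical counts of true positives, false positives, and false negatives:

```python
def precision(tp, fp):
    # Fraction of predicted positives that are correct
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of actual positives that were detected
    return tp / (tp + fn)

# Hypothetical example: 8 correct detections, 2 false alarms, 2 missed objects
print(precision(8, 2))  # 0.8
print(recall(8, 2))     # 0.8
```

Note the trade-off: lowering the detection threshold usually raises recall but lowers precision, which is exactly what the precision-recall curve in the AP section below captures.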

## Intersection Over Union(IOU)

IOU is a metric that measures the overlap between a ground truth annotation and a predicted bounding box. This metric is used in most state-of-the-art object detection algorithms. In object detection, the model predicts multiple bounding boxes for each object, and based on the confidence score of each bounding box it removes unnecessary boxes that fall below a threshold value. We need to set the threshold value based on our requirements.

IOU = area of intersection / area of union

```python
def IOU(box1, box2):
    # Boxes are given as (x, y, width, height)
    x1, y1, w1, h1 = box1
    x2, y2, w2, h2 = box2
    # Width and height of the overlapping region
    w_intersection = min(x1 + w1, x2 + w2) - max(x1, x2)
    h_intersection = min(y1 + h1, y2 + h2) - max(y1, y2)
    if w_intersection <= 0 or h_intersection <= 0:  # no overlap
        return 0
    I = w_intersection * h_intersection
    U = w1 * h1 + w2 * h2 - I  # union = sum of areas - intersection
    return I / U

iou = [IOU(y_test[i], y_pred[i]) for i in range(len(y_test))]
```

### Ground truth vs. predicted boxes

Without an IoU-based filter, the model's raw output contains many overlapping boxes around each object. After setting some threshold value, the unnecessary boxes are removed based on their confidence scores, leaving one box per detected object.
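The pruning step described above is commonly implemented as non-maximum suppression (NMS). Here is a minimal sketch (not from the original article), assuming boxes in (x, y, w, h) format and an IoU helper like the one defined earlier:

```python
def iou(a, b):
    # Overlap ratio of two (x, y, w, h) boxes
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    if w <= 0 or h <= 0:
        return 0.0
    inter = w * h
    return inter / (aw * ah + bw * bh - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Keep the highest-scoring box, drop every remaining box that
    # overlaps it by more than iou_threshold, then repeat.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two heavily overlapping boxes plus one separate box (hypothetical values)
boxes = [(10, 10, 50, 50), (12, 12, 50, 50), (100, 100, 40, 40)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

The two overlapping boxes collapse to the single higher-confidence one, while the distant box survives untouched.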

## Average Precision(AP)

To evaluate detection we commonly use the precision-recall curve, but average precision summarises it as a single numerical value, which makes it easy to compare performance across models. Based on the precision-recall curve, AP is the weighted mean of the precisions at each threshold, with the increase in recall from the previous threshold used as the weight. Average precision is calculated for each object class separately.

AP = Σ (R_n − R_(n−1)) × P_n

In the above formula, P refers to precision, R refers to recall, and the suffix n denotes the n-th confidence threshold.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Binary ground truth labels and the model's confidence scores
ground_truth = np.array([0, 0, 1, 1])
model_predicted_confidences = np.array([0.1, 0.4, 0.35, 0.8])

print(average_precision_score(ground_truth, model_predicted_confidences))
```

In the above output, we achieve an average precision of 0.8333 based on the confidence scores.
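To see where that number comes from, the same value can be reproduced by hand with the summation above. This is a sketch of the computation, not part of scikit-learn: sort predictions by confidence, sweep the threshold, and accumulate precision weighted by each step in recall:

```python
def average_precision(y_true, scores):
    # AP = sum over thresholds of (R_n - R_{n-1}) * P_n
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total_pos = sum(y_true)
    tp = fp = 0
    ap = prev_recall = 0.0
    for i in order:
        if y_true[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

print(average_precision([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # ≈ 0.8333
```

This matches the `average_precision_score` result above: the recall steps are 0.5 and 0.5, weighted by precisions 1.0 and 2/3.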

## Mean Average Precision(mAP)

Mean average precision is an extension of average precision. Average precision is computed for each individual class, whereas mAP averages those values to give a single score for the entire model. We use mAP to measure how well the model predicts across all classes.

mAP = (1 / N) × Σ AP_i, where N denotes the number of classes.

```python
# Average precision value for each class
AP = [0.83, 0.66, 0.99, 0.78, 0.60]

mAP = sum(AP) / len(AP)
print(mAP)  # ≈ 0.772
```

Each class has its own average precision value; averaging all of these values gives the mean average precision.

## Variations Among  mAP

In most research papers, these metrics carry extensions like mAP IoU=0.5, mAP IoU=0.75, and mAP small, medium, large. Here we will demonstrate what each of them actually means.

• mAP IoU=0.5 means a predicted box counts as a correct detection when its IoU with the ground truth box is at least 0.5; this is the standard threshold for most models.
• mAP IoU=0.75 uses the stricter threshold of 0.75, so predicted boxes with less than 75% overlap with the ground truth are counted as false positives, giving a more demanding evaluation.
• mAP small reports the mAP score computed only over the smaller objects in the data.
• mAP medium reports the mAP score computed only over the medium-sized objects in the data.
• mAP large reports the mAP score computed only over the larger objects in the data.
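As a small illustration of how the threshold changes the verdict, the same predicted box can be a true positive at IoU=0.5 yet a false positive at IoU=0.75 (hypothetical boxes, reusing an IoU helper like the one defined earlier):

```python
def iou(a, b):
    # Overlap ratio of two (x, y, w, h) boxes
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    if w <= 0 or h <= 0:
        return 0.0
    inter = w * h
    return inter / (aw * ah + bw * bh - inter)

prediction = (0, 0, 10, 10)
ground_truth = (3, 0, 10, 10)  # shifted 3 units to the right
overlap = iou(prediction, ground_truth)
print(round(overlap, 3))    # 0.538
print(overlap >= 0.5)       # True  -> counts as a detection at IoU=0.5
print(overlap >= 0.75)      # False -> counts as a miss at IoU=0.75
```

Averaging mAP over several thresholds in this way (as COCO does with IoU 0.5 to 0.95) rewards models whose boxes are not just roughly right but tightly localized.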

## Conclusion

In this article, we demonstrated the basic object detection metrics, which compare the predicted output against the ground truth values, along with their common variations. These metrics are used by most state-of-the-art models to evaluate their predictions.

AI enthusiast, currently working with Analytics India Magazine. I have experience working on real-time machine learning and deep learning problems, neural networks, and structuring machine learning projects. I am a computer vision researcher interested in solving real-time computer vision problems.
