YOLO Vulnerabilities Underscore Necessity for Open Source Auditing

A cybersecurity firm says it has found 11 security vulnerabilities in YOLOv7.

A security audit of the YOLO (You Only Look Once) object detection algorithm has revealed major security vulnerabilities. Trail of Bits, a cybersecurity firm, claimed to have found 11 vulnerabilities in YOLOv7. If exploited, these flaws could lead to compromised object detection, privacy breaches and safety risks, with potentially far greater downstream consequences.

Tesla AI reportedly uses the YOLO algorithm along with its sensors for detecting objects and people around the vehicle. Similarly, Roboflow, a computer vision platform that provides tools and services to simplify building computer vision models, also utilises YOLO models for object detection and image segmentation tasks.

First introduced in 2015 by Joseph Redmon et al. in the paper ‘You Only Look Once: Unified, Real-Time Object Detection’, YOLO’s object detection algorithms find extensive use in drones, autonomous vehicles, robotics and manufacturing, making them among the most widely adopted algorithms in these industries.

Even when not employing these algorithms directly, numerous entities leverage their open-source nature to build upon them. Over the years, the model has gone through several iterations, and YOLOv8 was released earlier this year.

Security vulnerabilities

Key security concerns, highlighted by the Trail of Bits report, include the absence of defensive coding practices, a lack of unit tests or a testing framework, and inadequate validation and sanitisation of both user and external data inputs in the YOLOv7 codebase.
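
To illustrate the kind of test the audit says is missing, the sketch below shows a minimal pytest-style check that a pre-processing helper rejects malformed image input. The letterbox_resize function and its validation rules are hypothetical stand-ins for illustration, not code from the YOLOv7 repository.

    # Hypothetical pytest-style sketch of the unit tests the audit found missing.
    # `letterbox_resize` is an assumed pre-processing helper, not an actual
    # YOLOv7 function; the validation rules shown are illustrative only.
    import numpy as np
    import pytest


    def letterbox_resize(image, size=640):
        """Validate the input before resizing it for the detector."""
        if not isinstance(image, np.ndarray) or image.ndim != 3 or image.shape[2] != 3:
            raise ValueError("expected an HxWx3 image array")
        if image.shape[0] == 0 or image.shape[1] == 0:
            raise ValueError("image has zero height or width")
        # ...actual resize and padding logic would go here...
        return image


    def test_rejects_empty_image():
        with pytest.raises(ValueError):
            letterbox_resize(np.zeros((0, 640, 3), dtype=np.uint8))


    def test_rejects_wrong_channel_count():
        with pytest.raises(ValueError):
            letterbox_resize(np.zeros((640, 640, 4), dtype=np.uint8))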

Trail of Bits, which is based in New York, mentions that the YOLOv7 codebase is not suitable for security-critical applications or applications that require high availability, such as autonomous vehicles.

The lack of defensive coding practices is evident throughout the codebase, which was not written or designed with security in mind. “If an attacker is able to control or manipulate various inputs to the system, such as model files, data files, or configuration files, they could perform a denial-of-service attack with low effort,” the report said.
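
The report does not publish exploit code, but the general risk class it describes is easy to illustrate: Python vision pipelines typically deserialise model checkpoints with pickle and parse YAML configuration files, so an attacker-supplied file can crash the loader or worse. The sketch below shows one way to load such inputs more defensively; it assumes a PyTorch checkpoint and a YAML config, and the size limit is an arbitrary illustrative value rather than a recommendation from the report.

    # Hedged sketch of defensive loading for untrusted model and config files.
    # The paths, size limit, and checks are illustrative, not from the audit.
    import os
    import torch
    import yaml

    MAX_CHECKPOINT_BYTES = 500 * 1024 * 1024  # arbitrary cap for illustration


    def load_untrusted_checkpoint(path):
        # Refuse absurdly large files before reading them into memory.
        if os.path.getsize(path) > MAX_CHECKPOINT_BYTES:
            raise ValueError("checkpoint exceeds size limit")
        # weights_only=True (available in recent PyTorch releases) restricts
        # unpickling to tensors and plain containers instead of arbitrary objects.
        return torch.load(path, map_location="cpu", weights_only=True)


    def load_untrusted_config(path):
        with open(path) as f:
            cfg = yaml.safe_load(f)  # never yaml.load() on untrusted input
        if not isinstance(cfg, dict):
            raise ValueError("config root must be a mapping")
        return cfg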

While the firm does not recommend the codebase for mission-critical or high-availability applications, the algorithm is nonetheless widely used commercially, including in exactly such settings.

Addressing open-source vulnerabilities

In its blog post, the cybersecurity company notes that YOLO is a product of academic work, generally not meant for commercial use, and does not follow appropriate cyber hygiene.

However, since the algorithm has been widely adopted for commercial use, Trail of Bits has also suggested remedies to address the vulnerabilities. “We only intended to bring to light the risks in using such prototypes without further security scrutiny,” the firm said.

Meredith Whittaker, president of the Signal Foundation, commended the work. “AI systems, like all networked computation, rely on core open components that are often in disrepair, maintained by volunteers if at all, even as trillion dollar companies built on top of them,” she tweeted.

“This company does really pragmatic security reviews of really important open source infrastructure,” another X user said.

Security audits of open-source projects are welcome

Generative AI is currently the hottest technology in the tech industry, but the revolution is equally led by the open-source community. The LLaMA large language model, an open-source model released by Meta, is among the most popular LLMs to date.

Notably, the Technology Innovation Institute (TII) in Abu Dhabi, United Arab Emirates, released Falcon earlier this year, a model that topped the Hugging Face Open LLM Leaderboard.

However, since open-source projects often rely on a community of contributors and users, security audits help establish and maintain trust within that community. Regular audits of generative AI projects could therefore prove highly beneficial.

Moreover, such audits should not be limited to open-source projects. Earlier this year, for example, an engineer at cybersecurity firm Tenable found a major red flag in the Microsoft Azure platform.

Tenable’s CEO said that most affected Microsoft Azure users had no clue about the vulnerability and hence could not make informed decisions about compensating controls and other risk-mitigating actions.

Pritam Bordoloi
I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.
