
YOLO Vulnerabilities Underscore Necessity for Open Source Auditing

A cybersecurity firm said that it has found 11 security vulnerabilities in YOLOv7.



A security audit of YOLO (You Only Look Once) object detection algorithms has revealed major security vulnerabilities. Trail of Bits, a cybersecurity firm, claimed to have found 11 vulnerabilities in YOLOv7. If exploited, these could lead to compromised object detection, privacy breaches and safety risks, with potentially far greater consequences downstream.

Tesla AI reportedly uses the YOLO algorithm along with its sensors for detecting objects and people around the vehicle. Similarly, Roboflow, a computer vision platform that provides tools and services to simplify the process of building computer vision models, also utilises the YOLO model for object detection and image segmentation tasks.

First introduced in 2015 by Joseph Redmon et al. in a paper titled ‘You Only Look Once: Unified, Real-Time Object Detection’, YOLO’s object detection algorithms find extensive usage in drones, autonomous vehicles, robotics, and numerous manufacturing companies, making them among the most widely adopted algorithms in these industries.

Even entities that do not directly employ these algorithms leverage their open-source nature to build upon them. Over the years, the model has gone through several iterations, and earlier this year, YOLOv8 was released.

Security vulnerabilities

Key security concerns, highlighted by the Trail of Bits report, include the absence of defensive coding practices, a lack of unit tests or a testing framework, and inadequate validation and sanitisation of both user and external data inputs in the YOLOv7 codebase.
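To make the validation concern concrete, here is a minimal sketch of the kind of defensive input handling the auditors describe as missing. The configuration field below (num_classes) and its bounds are hypothetical illustrations, not drawn from the YOLOv7 codebase or the audit:

```python
import yaml

def load_config(path: str) -> dict:
    # yaml.safe_load only constructs plain Python types; loading with
    # yaml.Loader instead can instantiate arbitrary Python objects.
    with open(path, "r", encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    if not isinstance(cfg, dict):
        raise ValueError("config root must be a mapping")
    # Hypothetical sanity check on a value an attacker could manipulate.
    nc = cfg.get("num_classes")
    if not isinstance(nc, int) or not 1 <= nc <= 10_000:
        raise ValueError(f"num_classes out of range: {nc!r}")
    return cfg
```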

Trail of Bits, which is based in New York, mentions that the YOLOv7 codebase is not suitable for security-critical applications or applications that require high availability, such as autonomous vehicles.

This is because the codebase was not written or designed with security in mind, and the absence of defensive coding practices is evident throughout. “If an attacker is able to control or manipulate various inputs to the system, such as model files, data files, or configuration files, they could perform a denial-of-service attack with low effort,” the report said.
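As a hedged illustration of this attack surface, consider how PyTorch checkpoints, the format YOLOv7 uses for its model files, are typically loaded. torch.load deserialises with pickle by default, so a tampered file can execute code or crash the process. The file name below is hypothetical, and the mitigation assumes PyTorch 1.13 or later:

```python
import torch

# Hypothetical checkpoint received from an untrusted source.
UNTRUSTED_CKPT = "yolov7_weights.pt"

# Unsafe: the default torch.load unpickles the file, so a crafted
# checkpoint can run arbitrary code or exhaust memory (denial of service).
# state = torch.load(UNTRUSTED_CKPT)

# Safer: weights_only=True restricts deserialisation to tensors and
# primitive containers, and the except clause fails cleanly instead of
# taking down a high-availability service.
try:
    state = torch.load(UNTRUSTED_CKPT, map_location="cpu", weights_only=True)
except Exception as exc:  # malformed pickle, truncated file, bad tensor
    raise SystemExit(f"refusing to load untrusted checkpoint: {exc}")
```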

While Trail of Bits does not recommend the codebase for mission-critical or high-availability applications, the algorithm is nonetheless frequently deployed commercially and in exactly such settings.

Addressing open-source vulnerabilities

In the blog post, the cybersecurity company notes that YOLO is a product of academic work, generally not meant for commercial use, and lacks appropriate cyber hygiene.

However, since the algorithm has been widely adopted for commercial use, Trail of Bits has also suggested remedies to address the vulnerabilities. “We only intended to bring to light the risks in using such prototypes without further security scrutiny,” they said.

Meredith Whittaker, president of the Signal Foundation, commended the work. “AI systems, like all networked computation, rely on core open components that are often in disrepair, maintained by volunteers if at all, even as trillion dollar companies built on top of them,” she tweeted.

“This company does really pragmatic security reviews of really important open source infrastructure,” another X user said.

Security audits of open-source projects are welcome

Generative AI has become the hottest technology in the industry today, but the revolution is equally led by the open-source community. Meta’s LLaMA, an open-source large language model, is among the most popular LLMs to date.

Notably, the Technology Innovation Institute (TII) in Abu Dhabi, United Arab Emirates, released Falcon earlier this year, which topped the Hugging Face Open LLM Leaderboard.

However, since open-source projects often rely on a community of contributors and users, security audits help establish and maintain trust within this community. Hence, regular audits of the numerous generative AI projects could also prove highly beneficial.

Moreover, such audits should also not be limited to open-source projects. For example, earlier this year, an engineer at cybersecurity firm Tenable found a major red flag in the Microsoft Azure platform.

Tenable’s CEO said that most Microsoft Azure users had no clue about the vulnerability and hence could not make informed decisions about compensating controls and other risk-mitigating actions.


Pritam Bordoloi

