
The “Test of Time” research that advanced our understanding of adversarial ML

Contemporary R&D progress shows that researchers have come up with ‘reactive’ and ‘proactive’ measures to secure ML algorithms.

The 39th International Conference on Machine Learning (ICML) is currently being held at the Baltimore Convention Center in Maryland, USA, and its ‘Test of Time’ award has gone to a paper published in 2012 titled ‘Poisoning Attacks against Support Vector Machines’.

The work demonstrated that an intelligent adversary can not only predict how the decision function of a Support Vector Machine (SVM) will change in response to malicious input, but can also use this prediction to construct malicious training data.

Conducted by Battista Biggio of the Department of Electrical and Electronic Engineering, University of Cagliari, along with Blaine Nelson and Pavel Laskov of the Wilhelm Schickard Institute for Computer Science, University of Tübingen, it is one of the earliest studies of poisoning attacks against SVMs.


ICML’s ‘Test of Time’ award recognises papers presented ten years earlier that have had a lasting impact on research and practice in machine learning.

The research 

The paper shows how an intelligent adversary can, to some extent, predict how an SVM’s decision function will change in response to malicious input, and then use this ability to construct malicious data.

SVMs are supervised machine learning algorithms used for classification and regression, and they can also detect outliers. They are capable of both linear and non-linear classification; for the latter, SVMs rely on the kernel trick, which implicitly maps inputs into a higher-dimensional feature space.
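For readers unfamiliar with the model, the snippet below trains a non-linear (RBF-kernel) SVM classifier using scikit-learn. The library, toy dataset and hyperparameters are illustrative assumptions made for this sketch and are not taken from the paper.

```python
# Minimal illustration of an SVM with a non-linear (RBF) kernel.
# scikit-learn, the toy dataset and the hyperparameters are assumptions
# made for this sketch; they are not taken from the paper.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel trick handles the non-linear boundary
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```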

In the course of the study, the researchers assumed that the attacker knows the learning algorithm, can draw data from the underlying distribution, and knows the training data used by the learner. They note that this last assumption may not hold in real-world situations, where the attacker would more likely use a surrogate training set drawn from the same distribution.

Under these assumptions, the researchers demonstrated a technique an attacker can use to craft a single data point that dramatically lowers the classification accuracy of an SVM.

To simulate an attack on the SVM, the researchers used a gradient ascent strategy, in which the gradient is computed from the properties of the optimal solution of the SVM training problem.

Because an attacker can manipulate the optimal SVM solution by injecting specially crafted attack points, the research demonstrates that such attack points can be found while the SVM training problem retains an optimal solution. It also shows that the gradient ascent procedure significantly increases the classifier’s test error.
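For intuition, the sketch below mimics this idea with a crude finite-difference approximation instead of the paper’s analytic gradient: a single attack point with a flipped label is repeatedly nudged in the direction that increases the SVM’s hinge loss on held-out validation data. The dataset, step size and iteration count are arbitrary assumptions, not the authors’ implementation.

```python
# Highly simplified sketch of a gradient-ascent poisoning attack on an SVM.
# The paper derives the gradient analytically from the optimality conditions
# of the SVM training problem; a finite-difference approximation is used here
# purely for illustration.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_train, y_train = X[:100], y[:100]
X_val, y_val = X[100:], y[100:]

def val_hinge_loss(x_atk, y_atk):
    """Retrain on the clean data plus one attack point; return validation hinge loss."""
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(np.vstack([X_train, x_atk]), np.append(y_train, y_atk))
    margins = clf.decision_function(X_val) * (2 * y_val - 1)  # map labels to {-1, +1}
    return np.mean(np.maximum(0.0, 1.0 - margins))

# Start from a training point with its label flipped, then move it along the
# approximate gradient that increases the attacker's objective.
x_atk, y_atk = X_train[0].copy(), 1 - y_train[0]
step, eps = 0.5, 1e-2
for _ in range(30):
    grad = np.zeros_like(x_atk)
    for d in range(x_atk.size):
        e = np.zeros_like(x_atk)
        e[d] = eps
        grad[d] = (val_hinge_loss(x_atk + e, y_atk) - val_hinge_loss(x_atk - e, y_atk)) / (2 * eps)
    x_atk += step * grad

print("Validation hinge loss with crafted point:", val_hinge_loss(x_atk, y_atk))
```

In the paper, this gradient is obtained in closed form from the SVM’s optimality conditions, which also makes the attack applicable to non-linear kernels.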

Significance of the research 

When this research was published in 2012, contemporary research works related to poisoning attacks were largely focused on detecting simple anomalies. 

This work, however, proposed a breakthrough that optimised the impact of data-driven attacks against kernel-based learning algorithms and emphasised the need to consider resistance against adversarial training data as an important factor in the design of learning algorithms.

The research presented in the paper inspired several subsequent lines of work in adversarial machine learning, such as adversarial examples for deep neural networks, other attacks on machine learning models, and machine learning security more broadly.

It is noteworthy that the research in this domain has evolved since then—from focusing on the security of non-deep learning algorithms to understanding the security properties of deep learning algorithms in the context of computer vision and cybersecurity tasks. 

Contemporary R&D shows that researchers have come up with ‘reactive’ and ‘proactive’ measures to secure ML algorithms. While reactive measures counter attacks after they occur, proactive measures are preventive in nature.

Timely detection of novel attacks, frequent retraining of classifiers, and verification of the consistency of classifier decisions against the training data are considered reactive measures.
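One hypothetical way to implement the last of these checks, assumed here purely for illustration rather than drawn from the paper, is to compare each training label with a cross-validated prediction and flag disagreements for review before retraining:

```python
# Rough sketch of a consistency check on training data (one possible reactive
# measure; an illustrative heuristic, not a method from the paper).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Predict each point's label using models trained on the remaining folds.
predicted = cross_val_predict(SVC(kernel="rbf", gamma="scale"), X, y, cv=5)

# Points whose labels disagree with the cross-validated prediction are flagged
# for manual review or removal before the classifier is retrained.
suspect = np.where(predicted != y)[0]
print(f"Flagged {len(suspect)} of {len(y)} training points as suspect.")
```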

Proactive measures include security-by-design defences against ‘white-box attacks’, where the attacker has perfect knowledge of the attacked system, and security-by-obscurity defences against ‘black-box attacks’, where the attacker has no information about the structure or parameters of the system.

The importance of such measures in present-day research underlines the significance of this paper as a pivotal step towards securing ML algorithms.

By the same token, industry leaders too have become increasingly aware of different kinds of adversarial attacks, such as poisoning, model stealing and model inversion, and have recognised that these attacks can inflict significant damage on businesses by breaching data privacy and compromising intellectual property.

Consequently, institutional vigilance about adversarial machine learning is being prioritised. Tech giants like Microsoft, Google and IBM have explicitly signalled their commitment to securing their ML systems against such attacks, just as they do their traditional software systems.

Many organisations are, however, already ahead of the curve in systematically securing their ML assets, and standards bodies such as ISO are drawing up rubrics to assess the security of ML systems across industries.

Governments are also urging industries to build secure ML systems. The European Union, for instance, has released a checklist to assess the trustworthiness of ML systems.

These concerns matter because machine learning techniques, which detect underlying patterns in large datasets, adapt to new behaviours and aid decision-making, have gained significant momentum in mainstream use.

ML techniques are routinely used to tackle big data challenges, including security-related problems such as detecting spam, fraud, worms and other malicious intrusions.

By identifying poisoning as an attack on ML algorithms, and by highlighting the disastrous implications it could have for industries such as healthcare, aviation, road safety and cybersecurity, this paper cemented its place as one of the first works to pave the way for adversarial machine learning research.

The authors set themselves the task of determining whether such attacks were possible against complex classifiers. Their objective was to identify an optimal attack point that maximises the classification error.

In doing so, the research team not only paved the way for adversarial machine learning research, the study of how ML models can be tricked with deceptive data, but also laid the foundation for research on defending against this growing threat to AI and ML systems.

Zinnia Banerjee