As machine learning enters critical areas such as healthcare, BFSI, and defence, it is making a significant impact on human lives. Although businesses are increasingly keen to adopt machine learning, the security of their ML systems has long been a concern. To address this, 12 tech companies, including Microsoft, NVIDIA, Bosch, and IBM, have partnered with MITRE, an American not-for-profit organisation, to create the Adversarial ML Threat Matrix.
This adversarial machine learning threat matrix is an industry-focused open framework that has been designed to empower security analysts to detect, respond to, and remediate threats against ML systems.
According to Microsoft, the threat matrix has been designed in response to the growing number of attacks around the world. The company surveyed 28 businesses and found that “most industry practitioners have yet to come to terms with adversarial machine learning.” Moreover, 25 of those 28 companies reported that they don’t have the right tools in place to secure their machine learning systems and are looking for guidance.
Microsoft further added that “… preparation is not just limited to smaller organisations.” The company spoke not only to Fortune 500 companies but also to governments, non-profits, and small and mid-sized organisations.
What Is The Adversarial ML Threat Matrix
This threat matrix has been developed in partnership with MITRE, as the company believes that the first step is to “have a framework that systematically organises the techniques employed by malicious adversaries in subverting ML systems.”
“We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organisation’s mission-critical ML systems,” the blog post stated.
The tool has been created specifically for security analysts. The Adversarial ML Threat Matrix places attacks on ML systems in a framework within which analysts can address new and emerging threats. The matrix is structured like the ATT&CK framework, which is already familiar to security analysts and should make threats to ML systems easier for them to address.
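A tactics-and-techniques matrix of this kind can be thought of as a simple mapping from each adversary tactic to the techniques catalogued under it. The sketch below is purely illustrative: the tactic and technique names are examples drawn from public write-ups of adversarial ML, not the authoritative contents of the matrix.

```python
# Illustrative sketch only: a minimal tabulation of tactics and techniques
# in the style of an ATT&CK-like matrix. The names used here are example
# placeholders, not the official Adversarial ML Threat Matrix entries.
threat_matrix = {
    "Reconnaissance": ["Acquire public ML artefacts"],
    "ML Model Access": ["Inference API access"],
    "Exfiltration": ["Model stealing"],
    "Evasion": ["Craft adversarial examples"],
}

def techniques_for(tactic):
    """Return the techniques tabulated under a tactic, or an empty list."""
    return threat_matrix.get(tactic, [])

# Print the matrix one tactic per row, as an analyst might scan it.
for tactic, techniques in threat_matrix.items():
    print(f"{tactic}: {', '.join(techniques)}")
```

Organising techniques under tactics this way lets an analyst start from an observed behaviour (say, suspicious inference API traffic) and look across the row for related techniques to monitor.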
Further, the companies have seeded this framework with a curated set of vulnerabilities and adversary behaviours that Microsoft and MITRE have vetted as effective against production ML systems, so that security analysts can focus on real threats to ML systems. “We also incorporated learnings from Microsoft’s vast experience in this space into the framework: for instance, we found that model stealing is not the end goal of the attacker but in fact leads to more insidious model evasion,” the blog post stated.
The company also found that usually, attackers use a combination of “traditional techniques” like phishing and lateral movement alongside adversarial ML techniques.
While adversarial machine learning has been a critical area of research in academia, the Adversarial ML Threat Matrix is the first attempt at collecting adversary techniques against machine learning systems in this form. As the threat landscape evolves, the framework will be advanced with input from the security and machine learning communities.
Learn more about Adversarial ML Threat Matrix here.