
Adversarial Reprogramming: Exploring A New Paradigm of Neural Network Vulnerabilities

Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake. An adversarial attacker could target autonomous vehicles by using stickers or paint to create an adversarial stop sign that the vehicle would interpret as a ‘yield’ or other sign. A confused car on a busy day is a potential catastrophe packed into a 2,000-pound metal box. So far, in the majority of adversarial attacks, the attacker has designed small perturbations of a given input to elicit a particular output. These range from untargeted attacks, which aim only to degrade a model's performance without producing any specific output, to targeted attacks, in which an attack against a classifier might aim for a specific desired output class for each input image.
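To make the untargeted/targeted distinction concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft such per-input perturbations. The model, data, and epsilon value are illustrative assumptions, not details from this article.

```python
# A minimal FGSM sketch in PyTorch. Untargeted: nudge the input to
# *increase* the loss on the true label, degrading accuracy. Targeted:
# nudge it to *decrease* the loss on an attacker-chosen label, so the
# model predicts that specific class.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03, targeted=False):
    """Perturb `x` by `eps` along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    step = eps * x.grad.sign()
    # Step against the gradient for a targeted attack, along it otherwise.
    x_adv = x - step if targeted else x + step
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained classifier and a batch of images:
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# x_adv = fgsm(model, images, true_labels)                   # untargeted
# x_adv = fgsm(model, images, target_labels, targeted=True)  # targeted
```

The one-step sign update keeps every pixel change within ±eps, which is why such perturbations can be nearly imperceptible while still flipping the model's prediction.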
Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.