Robustness Of Multi-Exit Architectures Against Adversarial Attacks


A deep neural network (DNN) learns increasingly complex representations as it goes deeper, enabling it to perform tasks such as image classification and speech recognition.

On the flip side, making DNNs deeper comes with increased inference-time computational demands. To remedy this, researchers have devised multi-exit architectures, in which deeper layers correct the mistakes of shallower ones; for many samples, the shallow layers alone are good enough.


In multi-exit architectures, the stack of processing layers is interleaved with early output layers, or ‘exits’, so that the processing of a test example can be halted early. By bypassing the remaining layers, these architectures make input-specific decisions that save computational time and effort, enabling faster inference and deployment on low-power IoT devices.
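As an illustration, here is a minimal sketch of such confidence-threshold exiting, assuming a PyTorch-style model split into hypothetical `blocks` and per-exit `exit_heads` module lists (these names and the threshold value are our assumptions, not from the paper):

```python
import torch.nn.functional as F

def early_exit_inference(blocks, exit_heads, x, threshold=0.9):
    """Run x through the stacked blocks, stopping at the first exit
    whose softmax confidence clears the threshold. Assumes a single
    sample (batch size 1) and one classifier head per block."""
    h = x
    pred = None
    for i, (block, head) in enumerate(zip(blocks, exit_heads)):
        h = block(h)                      # compute the next stage of features
        probs = F.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)    # top-class confidence
        if conf.item() >= threshold:      # confident enough: exit early
            return pred.item(), i         # prediction and the exit taken
    return pred.item(), len(blocks) - 1   # fell through to the final exit
```

The computational saving comes from every layer after exit `i` never being evaluated; a sample that exits at the first head costs only a fraction of a full forward pass.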

However, the computational savings provided by this approach may not be robust against adversarial attacks. In particular, an adversary may increase the average inference time to slow down an adaptive DNN, an attack similar in spirit to a denial-of-service attack.

DeepSloth

Recently, a group of researchers from the University of Maryland and Alexandru Ioan Cuza University studied the robustness of multi-exit architectures against adversarial attacks and devised DeepSloth. The team found that examples crafted by previous evasion attacks fail to bypass the model’s early exits, so they modified the attack’s objective function to target those exits directly. The resulting DeepSloth attack reduces the efficacy of multi-exit architectures by up to 100 percent, meaning it can render all the early exits ineffective. Even in the most constrained cases, the attack could reduce the efficacy by 5-45 percent.
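The gist of that modified objective can be sketched as a PGD-style loop that pushes every exit’s output distribution toward uniform, so no exit ever becomes confident enough to fire. This is an illustrative reconstruction under our own assumptions (the `blocks`/`exit_heads` split and the epsilon and step-size values are ours), not the authors’ code:

```python
import torch
import torch.nn.functional as F

def deepsloth_like_perturbation(blocks, exit_heads, x,
                                eps=8/255, alpha=1/255, steps=30):
    """Illustrative slowdown attack: perturb x within an L-inf ball so
    that every early exit's softmax output flattens toward uniform,
    keeping its confidence below any plausible exit threshold."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        h = x_adv
        loss = 0.0
        for block, head in zip(blocks, exit_heads):
            h = block(h)
            # Cross-entropy against a uniform target (up to a constant);
            # minimising it flattens this exit's output distribution.
            loss = loss - F.log_softmax(head(h), dim=-1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv - alpha * grad.sign()).detach()  # descend on the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)        # stay in the L-inf ball
        x_adv = x_adv.clamp(0, 1)                       # stay a valid image
    return x_adv
```

Because low-confidence exits never trigger, every perturbed sample falls through to the final layer, erasing the architecture’s computational savings.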

DeepSloth was also found to be effective in a few black-box scenarios, where the attacker has only limited knowledge of the victim network. Standard defences against adversarial samples were found to be inadequate against the slowdown.

DeepSloth poses a threat to many practical applications that impose strict limits on the responsiveness and resource usage of DNNs. It forces the victim to do more work than the adversary, either by amplifying the latency required to process a sample or by crafting reusable perturbations.

DeepSloth was also compared to previous adversarial attacks, which add imperceptible perturbations to the victim network’s test samples to force misclassification. In many cases, although these attacks hurt the accuracy of the system, they could not cause any significant decrease in efficacy; in a few cases, they were even found to increase it. To understand why DeepSloth differs from prior attacks, the team visualised a model’s hidden-block features on the original and perturbed test samples, and found that DeepSloth disrupted the original features slightly more than the other attacks did.
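For concreteness, efficacy can be quantified with a simple proxy, the average fraction of blocks a sample skips (our own simplified definition; the paper’s exact metric may differ). This reuses the early_exit_inference sketch above:

```python
def efficacy(blocks, exit_heads, samples, threshold=0.9):
    """Rough efficacy proxy: mean fraction of blocks skipped per
    sample. 0.0 means every sample runs the full network."""
    skipped = 0.0
    for x in samples:
        _, exit_idx = early_exit_inference(blocks, exit_heads, x, threshold)
        skipped += 1.0 - (exit_idx + 1) / len(blocks)
    return skipped / len(samples)

# A slowdown attack succeeds when efficacy on perturbed inputs collapses
# toward zero while efficacy on clean inputs stays high.
```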


Wrapping up

DeepSloth is, in effect, a first-of-its-kind denial-of-service attack on a neural network. When models are served directly from a server, the attack ties up the server’s resources and prevents the model from running at full capacity. In cases where a multi-exit architecture is split between edge and cloud, the attack could force the device to send all of its data to the server, causing considerable damage. In IoT deployments, for example, DeepSloth can spike the latency up to five times.

“Adversarial training, a standard countermeasure for adversarial perturbations, is not effective against DeepSloth. Our analysis suggests that slowdown attacks are a realistic, yet under-appreciated, threat against adaptive models,” the team said. 

The original paper was presented at ICLR 2021.

