
Top 12 Papers On Adversarial Learning At CVPR 2020

Security in data science practice has always been a crucial concern for organisations. With the increasing use of machine learning and deep learning models, researchers have been trying to make these models secure and robust in every way possible. Adversarial learning, in which models are trained and evaluated against adversarially crafted inputs, helps make machine learning systems more robust.

Below, we have listed the top 12 research papers on adversarial learning presented at the Computer Vision and Pattern Recognition (CVPR) 2020 conference.

(The list is in no particular order)

1| DaST: Data-Free Substitute Training for Adversarial Attacks

About: In this paper, the researchers proposed a data-free substitute training method, called DaST, which can obtain substitute models for adversarial black-box attacks without any real data. To achieve this, DaST utilises specially designed generative adversarial networks (GANs) to train the substitute models. The experiments demonstrated that the substitute models produced by DaST can achieve competitive performance compared with baseline models trained on the same training set.
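
At a high level, training alternates between a generator that synthesises inputs and a substitute network that learns to imitate the black-box model's labels on them, so no real data is ever required. The PyTorch sketch below is a heavily simplified illustration of that loop, not the authors' released code; the model objects, the hard-label black-box interface and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def dast_step(generator, substitute, black_box, g_opt, s_opt,
              latent_dim=100, batch_size=64, device="cpu"):
    """One simplified DaST-style update, using no real data.
    `black_box(x)` is assumed to return hard class labels (LongTensor)."""
    z = torch.randn(batch_size, latent_dim, device=device)

    # 1) Substitute update: imitate the black-box on synthetic inputs.
    x = generator(z).detach()
    with torch.no_grad():
        y = black_box(x)                      # labels queried from the attacked model
    s_opt.zero_grad()
    loss_s = F.cross_entropy(substitute(x), y)
    loss_s.backward()
    s_opt.step()

    # 2) Generator update: produce samples on which the substitute still
    #    disagrees with the black-box, i.e. the most informative samples.
    g_opt.zero_grad()
    x = generator(z)
    with torch.no_grad():
        y = black_box(x)
    loss_g = -F.cross_entropy(substitute(x), y)   # maximise substitute error
    loss_g.backward()
    g_opt.step()
    return loss_s.item(), loss_g.item()
```

Once the substitute imitates the black-box well enough, standard white-box attacks crafted on the substitute can be transferred to the attacked model.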

Read the paper here.

2| Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations

About: In this paper, a team of researchers from IBM Research and others proposed Semantify-NN, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks. The proposed approach features semantic perturbation layers, known as SP-layers, which expand the verification power of current verification methods beyond ℓp-norm-bounded threat models. The researchers further demonstrated how the SP-layers can be implemented and refined for verification against a diverse set of semantic attacks.
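
One way to picture an SP-layer: it is an explicit, differentiable layer that maps a low-dimensional semantic parameter (say, a brightness shift) to the corresponding pixel-space image, so that running an existing ℓp verifier over that parameter certifies the semantic perturbation end to end. The toy brightness SP-layer below is only an assumed illustration of this idea, not the paper's implementation.

```python
import torch
import torch.nn as nn

class BrightnessSPLayer(nn.Module):
    """Toy semantic-perturbation (SP) layer: the verified 'input' becomes a
    1-D brightness shift `delta`, while the image itself is held fixed.
    Verifying the composed network over an interval of `delta` then
    certifies the classifier against brightness changes of that size.
    (Pixel clipping is omitted for simplicity.)"""

    def __init__(self, image):
        super().__init__()
        self.register_buffer("image", image)   # fixed image, shape (C, H, W)

    def forward(self, delta):
        # delta: tensor of shape (batch, 1), broadcast over the pixel grid
        return self.image.unsqueeze(0) + delta.view(-1, 1, 1, 1)

# Usage sketch: prepend the SP-layer to a classifier `f` and hand the
# composed model to an off-the-shelf interval / linear-relaxation verifier,
# bounding `delta` instead of the full image.
# composed = nn.Sequential(BrightnessSPLayer(image), f)
```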

Read the paper here.

3| The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

About: Researchers from UC Berkeley and others studied model-inversion attacks, in which access to a model is abused to infer information about the training data. Focusing on image data, they proposed a simple yet effective attack method, termed the generative model-inversion (GMI) attack, which can invert deep neural networks (DNNs) and synthesise private training data with high fidelity. The end-to-end GMI algorithm is based on GANs, which act as a prior that keeps the reconstructed images realistic.
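
At its core, the attack searches the latent space of a GAN for an image that the target network classifies as the chosen identity with high confidence while still looking realistic. The snippet below is a heavily simplified sketch of that latent-space optimisation; the `generator`, `discriminator` and `target_model` objects, the loss weighting and the step counts are all assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def gmi_invert(generator, discriminator, target_model, target_class,
               latent_dim=100, steps=1500, lr=0.02, lam=100.0, device="cpu"):
    """Simplified generative model-inversion: optimise a latent code z so the
    generated image (i) looks real to the GAN discriminator (prior loss) and
    (ii) is classified as `target_class` by the attacked model (identity loss)."""
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    label = torch.tensor([target_class], device=device)
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)
        prior_loss = -discriminator(x).mean()            # stay on the image manifold
        identity_loss = F.cross_entropy(target_model(x), label)
        (prior_loss + lam * identity_loss).backward()
        opt.step()
    return generator(z).detach()                          # reconstructed private-looking image
```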

Read the paper here.

4| A Self-Supervised Approach for Adversarial Robustness

About: In this paper, the researchers combined the benefits of adversarial training and input-processing-based defences and proposed a self-supervised adversarial training mechanism in the input space. The approach can be deployed as a plug-and-play solution to protect a variety of vision systems, including classification, segmentation and detection.
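
The self-supervised ingredient is that adversarial examples are crafted without labels, by maximising the distortion of deep features, and a purification network is then trained to undo exactly this kind of perturbation before the image reaches the downstream model. The sketch below shows only the label-free perturbation step; the feature extractor, step sizes and budget are assumptions, and the purifier training itself is omitted.

```python
import torch
import torch.nn.functional as F

def self_supervised_perturb(feature_extractor, x, eps=8/255, alpha=2/255, steps=5):
    """Craft a label-free adversarial example by maximising feature distortion
    of a fixed feature extractor (no class labels are used)."""
    clean_feat = feature_extractor(x).detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(feature_extractor(x_adv), clean_feat)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascend the distortion
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

A purifier trained to map such perturbed images back to their clean versions can then be prepended, plug-and-play, to any classification, segmentation or detection model at inference time.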

Read the paper here.

5| Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalisation

About: In this paper, the researchers identified Adversarial Feature Overfitting (AFO), which can cause poor adversarially robust generalisation, and showed in a simple Gaussian model that adversarial training can overshoot the optimal point in terms of robust generalisation, leading to AFO. They proposed Adversarial Vertex mixup (AVmixup), a soft-labelled data augmentation approach for improving adversarially robust generalisation.
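
Concretely, AVmixup crafts an adversarial perturbation for each training example, scales it to define an 'adversarial vertex', and then trains on random interpolations between the clean input and that vertex with correspondingly softened labels. The helper below is a simplified rendering of this augmentation; the perturbation source (e.g. PGD), the scaling factor and the label-smoothing values are assumptions.

```python
import torch
import torch.nn.functional as F

def avmixup_batch(x, y, delta, num_classes, gamma=2.0, ls_clean=1.0, ls_adv=0.1):
    """Simplified AVmixup augmentation.

    x:     clean inputs, shape (B, ...)
    y:     integer class labels, shape (B,)
    delta: adversarial perturbations for x (e.g. from a PGD attack), same shape as x
    Returns interpolated inputs and soft labels for training with a soft-label loss.
    """
    vertex = x + gamma * delta                                   # adversarial vertex
    lam = torch.rand(x.size(0), *([1] * (x.dim() - 1)), device=x.device)
    x_mix = lam * x + (1.0 - lam) * vertex

    onehot = F.one_hot(y, num_classes).float()
    def smooth(t, conf):
        # label smoothing: keep `conf` probability mass on the true class
        return t * conf + (1.0 - t) * (1.0 - conf) / (num_classes - 1)

    lam_flat = lam.view(-1, 1)
    y_mix = lam_flat * smooth(onehot, ls_clean) + (1.0 - lam_flat) * smooth(onehot, ls_adv)
    return x_mix, y_mix
```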

Read the paper here.

6| How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework

About: In this paper, researchers from Google and UC Davis proposed a new continuous neural network framework called the Neural Stochastic Differential Equation (Neural SDE), which naturally incorporates various commonly used regularisation mechanisms based on random noise injection. They further demonstrated that the Neural SDE network can achieve better generalisation than the Neural ODE and is more resistant to both adversarial and non-adversarial input perturbations.
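
The connection to noise injection is easiest to see in a simple Euler-Maruyama discretisation of the SDE dx = f(x) dt + sigma dW: every step adds a learned drift term plus a small diffusion (noise) term. The block below is only an illustrative sketch with an assumed constant diffusion coefficient and fixed step size, not the paper's solver.

```python
import torch
import torch.nn as nn

class NeuralSDEBlock(nn.Module):
    """Euler-Maruyama discretisation of dx = f(x) dt + sigma dW.
    Setting sigma = 0 recovers a (discretised) neural ODE block."""

    def __init__(self, dim, hidden=64, steps=10, dt=0.1, sigma=0.1):
        super().__init__()
        self.drift = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.steps, self.dt, self.sigma = steps, dt, sigma

    def forward(self, x):
        for _ in range(self.steps):
            noise = torch.randn_like(x)
            x = x + self.drift(x) * self.dt + self.sigma * (self.dt ** 0.5) * noise
        return x

# At test time, predictions can be averaged over several stochastic forward
# passes to reduce the variance introduced by the diffusion term.
```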

Read the paper here.

7| Unpaired Image Super-Resolution Using Pseudo-Supervision

About: In this paper, the researcher proposed an unpaired image super-resolution (SR) method using a generative adversarial network that does not require a paired or aligned training dataset. The network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network. Experiments on diverse datasets showed that the proposed method is superior to existing solutions to the unpaired SR problem.

Read the paper here.

8| Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

About: In this paper, the researchers introduced a benchmark for detecting backdoor attacks, also known as Trojan attacks, on deep convolutional neural networks (CNNs). They introduced the concept of Universal Litmus Patterns (ULPs), which reveal a backdoor by feeding these universal patterns to the network and analysing the output, i.e., classifying the network as ‘clean’ or ‘corrupted’.
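
Detection itself is cheap: feed a small set of learned litmus patterns through the suspect network, pool the resulting logits, and let a lightweight classifier decide between ‘clean’ and ‘corrupted’. The sketch below shows only this detection step; how the patterns and the detector are jointly optimised over a pool of clean and Trojaned networks is omitted, and all shapes and module names are assumptions.

```python
import torch

def ulp_detect(suspect_model, litmus_patterns, detector):
    """Score a suspect CNN for the presence of a backdoor.

    litmus_patterns: tensor (M, C, H, W) of learned universal litmus patterns
    detector:        small classifier mapping the pooled logits to two scores,
                     one for 'clean' and one for 'corrupted'
    """
    with torch.no_grad():
        logits = suspect_model(litmus_patterns)     # (M, num_classes)
        pooled = logits.flatten().unsqueeze(0)      # concatenate all responses
        return detector(pooled).softmax(dim=-1)     # probabilities over {clean, corrupted}
```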

Read the paper here.

9| Robustness Guarantees for Deep Neural Networks on Videos

About: In this paper, researchers from the University of Oxford considered the robustness of deep neural networks on videos, which comprise both the spatial features of individual frames, extracted by a convolutional neural network (CNN), and the temporal dynamics between adjacent frames, captured by a recurrent neural network. To measure robustness, they studied the maximum safe radius problem, which computes the minimum distance from the optical flow sequence obtained from a given input to that of an adversarial example in the neighbourhood of the input.
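
In notation of our own choosing (not the paper's), with f denoting the video classifier and λ the optical-flow extractor, the maximum safe radius of an input video v can be written as the distance to the nearest input that changes the prediction:

```latex
\mathrm{MSR}(v) \;=\; \min_{v'} \left\{ \, \lVert \lambda(v') - \lambda(v) \rVert \; : \; f(v') \neq f(v) \, \right\}
```

where v' ranges over a bounded neighbourhood of v; a lower bound on this quantity certifies that no adversarial example exists within that distance of the input's optical-flow sequence.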

Read the paper here.

10| Benchmarking Adversarial Robustness on Image Classification

About: In this paper, the researchers established a comprehensive and coherent benchmark for evaluating adversarial robustness on image classification tasks. The benchmark provides a detailed understanding of how existing methods behave under different attack scenarios, with the hope of facilitating future research.

Read the paper here.

11| What It Thinks Is Important Is Important: Robustness Transfers Through Input Gradients

About: In this paper, the researchers proposed a robustness transfer method, called input gradient adversarial matching (IGAM), that is both task- and architecture-agnostic, with input gradients as the medium of transfer. They showed that input gradients are an effective medium for transferring adversarial robustness across different tasks and even across different model architectures.
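
The transfer signal is simple to state: while the student is trained on its own task, an extra loss term encourages its input gradients to resemble those of an adversarially robust teacher on the same images. The sketch below uses a plain distance between gradients for clarity; the paper's full scheme (which also involves adversarial matching via a discriminator) is more elaborate, and all module names here are assumptions.

```python
import torch
import torch.nn.functional as F

def igam_style_loss(student, teacher, x, y, weight=1.0):
    """Student task loss plus an input-gradient matching term (simplified).
    The robust `teacher` is frozen; only the student is being trained."""
    x = x.detach().clone().requires_grad_(True)

    # Input gradient of the robust teacher's loss (treated as a fixed target)
    t_loss = F.cross_entropy(teacher(x), y)
    t_grad = torch.autograd.grad(t_loss, x)[0].detach()

    # Student loss and its input gradient, kept in the graph so that the
    # matching term backpropagates into the student's parameters
    s_loss = F.cross_entropy(student(x), y)
    s_grad = torch.autograd.grad(s_loss, x, create_graph=True)[0]

    return s_loss + weight * F.mse_loss(s_grad, t_grad)
```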

Read the paper here.

12| Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

About: In this work, the researchers examined the insecurity of the current best-performing re-identification (ReID) models by proposing a learning-to-mis-rank formulation to perturb the ranking of the system output. They also developed a multi-stage network architecture to extract general and transferable features for the adversarial perturbations.
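
The heart of the attack is a mis-ranking loss: perturb the probe image so that, in the ReID embedding space, images of the same identity end up farther from it than images of other identities. The function below is a hedged sketch of such a loss; the margin and the choice of hardest pairs are assumptions, and the paper's multi-stage perturbation generator is not shown.

```python
import torch
import torch.nn.functional as F

def mis_ranking_loss(query_feat, pos_feats, neg_feats, margin=0.5):
    """Learning-to-mis-rank objective (simplified).

    query_feat: embedding of the perturbed query image, shape (D,)
    pos_feats:  embeddings of images of the SAME identity, shape (P, D)
    neg_feats:  embeddings of images of OTHER identities, shape (N, D)

    Minimising this loss pushes true matches away and pulls wrong matches
    closer, inverting the ranking produced by the ReID system.
    """
    d_pos = torch.cdist(query_feat.unsqueeze(0), pos_feats).min()   # nearest true match
    d_neg = torch.cdist(query_feat.unsqueeze(0), neg_feats).max()   # farthest non-match
    return F.relu(d_neg - d_pos + margin)
```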

Read the paper here.



Ambika Choudhury

A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
