Advancements in deep learning and computer vision make newsworthy headlines almost every day. However, many researchers and scientists have also warned against their misuse. Earlier this year, we discussed in detail some alarming instances where neural networks and deep learning models faced serious security threats.
Earlier this year, in July, researchers from the University of Washington, the University of Chicago and UC Berkeley created a dataset of natural adversarial examples. The dataset is curated from 7,500 natural adversarial examples and is released as a test set for ImageNet classifiers, known as ImageNet-A.
According to the researchers, this dataset serves as a new way to measure the robustness of a classifier. It is designed to exploit deep flaws in current classifiers, including their over-reliance on colour, texture and background cues. On this dataset, a machine vision system reportedly achieves only around 2% accuracy, an accuracy drop of approximately 90%.
Adversarial examples are instances with minute deviations in their features that cause a machine learning model to predict false outcomes. There are several approaches by which one can create adversarial examples, such as minimising the distance between the adversarial example and the instance to be manipulated, accessing the gradients of the model, or accessing its prediction function, among others.
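To make the gradient-based approach concrete, here is a minimal sketch of a fast-gradient-sign-style attack on a toy logistic-regression model. The model, weights and inputs are all hypothetical illustrations, not anything from the paper: the idea is simply to nudge the input in the direction that increases the model's loss until the prediction flips.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One fast-gradient-sign step: perturb x in the direction
    that increases the log-loss of a logistic model sigmoid(w.x + b)."""
    s = w @ x + b
    grad = (sigmoid(s) - y) * w  # gradient of the log-loss w.r.t. the input x
    return x + eps * np.sign(grad)

# toy model and input (purely illustrative values)
w = np.array([2.0, -3.0])
b = 0.0
x = np.array([0.5, 0.1])
y = 1.0  # true label

x_adv = fgsm(x, y, w, b, eps=0.3)

pred_clean = int(sigmoid(w @ x + b) > 0.5)      # 1: correct on the clean input
pred_adv = int(sigmoid(w @ x_adv + b) > 0.5)    # 0: flipped by a small perturbation
```

A perturbation of only 0.3 per feature is enough to flip this toy model's prediction, which is the essence of a synthetic adversarial example; ImageNet-A's contribution is that its images fool classifiers without any such crafted perturbation.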
Adversarial examples are typically used to measure a machine learning model's worst-case performance. The images in ImageNet-A are natural, unmodified, real-world examples collected online and selected so as to cause a "model to make a mistake," just as synthetic adversarial examples do. The examples here caused errors for various reasons, such as weather conditions and occlusion, among others.
The images above are natural adversarial examples from ImageNet-A, where the text in black is the original class and the text in red is the class predicted by machine vision systems. They clearly show that the machine vision system predicts these images with less than 3 percent accuracy.
Designing The Dataset
ImageNet-A is designed by first collecting numerous images related to an ImageNet class. The collection is then curated by separating the incorrectly classified examples from the correctly classified ones. From the remaining incorrectly classified examples, the researchers manually select a subset of high-quality images. ImageNet-A thus consists of 200 classes, covering the broadest categories spanned by ImageNet-1K. Along with images related to the ImageNet dataset, ImageNet-A also includes images from other online sites that relate to each of the 200 ImageNet classes.
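The automated part of that curation step can be sketched as a simple filter that keeps only the images a fixed classifier gets wrong, leaving candidates for later manual review. This is an illustrative sketch, not the authors' actual pipeline; the `classify` argument stands in for whatever fixed model is used to test the candidates.

```python
def curate_adversarial(images, labels, classify):
    """Keep only (image, label) pairs that the given classifier
    misclassifies -- candidate natural adversarial examples."""
    return [(img, lab) for img, lab in zip(images, labels)
            if classify(img) != lab]

# toy usage with a deliberately poor "classifier" that always predicts "dog"
images = ["photo_a", "photo_b", "photo_c"]
labels = ["dog", "cat", "squirrel"]
candidates = curate_adversarial(images, labels, classify=lambda img: "dog")
# only the two misclassified images survive the filter
```

In the actual dataset construction, a manual selection pass over such surviving candidates then keeps only high-quality images for the final 200 classes.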
Research on adversarial examples has been ongoing for a few years now. Researchers use adversarial examples as a medium to expose the gap between the intentions of machine learning developers and the actual behaviour of their algorithms. Organisations are deploying machine vision systems in a number of domains, from autonomous driving to healthcare. This makes it very important for a model to predict outcomes as accurately as possible, given cases such as deaths involving autonomous cars and a 3D-printed turtle being recognised as a rifle.
Ian Goodfellow of OpenAI has said that adversarial examples are a way of showing that modern machine learning algorithms can be broken in surprising ways. Failures caused by adversarial examples demonstrate that even a simple machine learning algorithm can behave very differently from how its developers intend.
You can read the full paper here.
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: firstname.lastname@example.org