A self-driving car must be accurate; there is no room for second-guessing. Its accuracy improves drastically if it has been trained on data annotated with parameters like colours, shapes, sizes, signs and angles.

The question is: where can one get that kind of data?

Today, data labelling has become an industry of its own. Developing nations like India have data labellers operating out of remote places with minimal education. It is a common notion that more labelled data leads to more robust machine learning models.

However, that is not always the case. Real-time data comes with its own set of uncertainties, and flawed data collection introduces the further problem of noisy data.

So, assessing the reliability of a machine learning model should not stop at robustness. It also requires a diverse toolbox for understanding models, including visualisation, disentanglement of relevant features, and measuring extrapolation to different datasets or to the long tail of natural but unusual inputs, to get a clearer picture.

Researchers have also found that common (non-adversarial) visual corruptions such as fog, blur or pixelation offer a rich route towards adversarial robustness.

For example, fog or blur effects on images have emerged as another avenue for measuring the robustness of computer vision models.
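As a sketch of how such a corruption-based check might look, the snippet below applies a simple box blur (a stand-in for the fog, blur or pixelation corruptions mentioned above, not the actual benchmark corruptions) to a batch of images and measures a classifier's accuracy on the corrupted inputs. The `predict` function and the toy data are hypothetical placeholders.

```python
import numpy as np

def box_blur(images):
    """A simple 4-neighbour box blur over a batch of HxW images: a stand-in
    for common, non-adversarial corruptions such as blur or fog."""
    return (images
            + np.roll(images, 1, axis=1) + np.roll(images, -1, axis=1)
            + np.roll(images, 1, axis=2) + np.roll(images, -1, axis=2)) / 5.0

def corruption_accuracy(predict, images, labels, corruption):
    """Accuracy of a classifier on corrupted inputs: an easily computed
    proxy signal for robustness."""
    preds = predict(corruption(images))
    return float(np.mean(preds == labels))

# Toy usage: a brightness-threshold "classifier" on uniform 8x8 images.
images = np.stack([np.full((8, 8), v) for v in (0.9, 0.8, 0.1, 0.2)])
labels = np.array([1, 1, 0, 0])
predict = lambda x: (x.mean(axis=(1, 2)) > 0.5).astype(int)
acc = corruption_accuracy(predict, images, labels, box_blur)
```

Comparing `corruption_accuracy` against clean accuracy (pass `lambda x: x` as the corruption) gives a quick, attack-free robustness indicator of the kind the article describes.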
Robustness to such common corruptions is considered to be linked to adversarial robustness, and corruption robustness has been proposed as an easily computed indicator of adversarial robustness.

Despite the promise of adversarial training, its reliance on large numbers of labelled examples has been a major obstacle to developing robust classifiers.

To assess how much annotated data training really needs, researchers at DeepMind propose two simple Unsupervised Adversarial Training (UAT) approaches, tested on two standard image classification benchmarks.

Why Generalise Adversities

One of the most successful approaches for obtaining adversarially robust classifiers is adversarial training. A central challenge for adversarial training has been the difficulty of adversarial generalisation. Previous works have argued that adversarial generalisation may simply require more data than natural generalisation. In this paper, the DeepMind researchers pose a simple question: is labelled data necessary, or is unsupervised data sufficient?

To test this, they formalise two approaches: UAT with online targets, and UAT with fixed targets.

In the experiment, the CIFAR-10 training set was first divided into halves, where the first 20,000 examples are used to train the base classifier and the latter 20,000 to train a UAT model. Of the latter 20,000, 4,000 examples were treated as labelled and the remaining 16,000 as unlabelled.

These experiments reveal that one can reach near state-of-the-art adversarial robustness with as few as 4,000 labels for CIFAR-10 (10 times fewer than the original dataset) and as few as 1,000 labels for SVHN (100 times fewer than the original dataset).
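To make the fixed-targets idea concrete, here is a minimal sketch on a toy logistic-regression model: a base classifier fit on the small labelled set pseudo-labels the unlabelled pool once, and those fixed targets then drive adversarial training (plain FGSM here for simplicity). The tiny model, the attack choice and all names are illustrative assumptions, not the paper's architecture or training recipe.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, epochs=300, lr=0.5):
    """Standard (non-adversarial) logistic regression via gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def fgsm(X, y, w, eps=0.1):
    """FGSM perturbation; for a linear model the input gradient is (p - y) * w."""
    grad = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

def uat_fixed_targets(X_lab, y_lab, X_unlab, eps=0.1, epochs=300, lr=0.5):
    """UAT with fixed targets (sketch):
    1. fit a base classifier on the small labelled set;
    2. pseudo-label the unlabelled pool once ("fixed" targets);
    3. adversarially train on labelled + pseudo-labelled data."""
    w_base = train_logreg(X_lab, y_lab)
    y_pseudo = (sigmoid(X_unlab @ w_base) > 0.5).astype(float)
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    w = np.zeros(X_all.shape[1])
    for _ in range(epochs):
        X_adv = fgsm(X_all, y_all, w, eps)  # worst-case inputs for current w
        w -= lr * X_adv.T @ (sigmoid(X_adv @ w) - y_all) / len(y_all)
    return w

# Toy usage: two Gaussian blobs; 20 labelled points, 80 unlabelled.
rng = np.random.default_rng(1)
X0, X1 = rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))
X_lab = np.vstack([X0[:10], X1[:10]])
y_lab = np.concatenate([np.zeros(10), np.ones(10)])
X_unlab = np.vstack([X0[10:], X1[10:]])
w = uat_fixed_targets(X_lab, y_lab, X_unlab)
X_test = np.vstack([X0, X1])
y_test = np.concatenate([np.zeros(50), np.ones(50)])
clean_acc = float(np.mean((sigmoid(X_test @ w) > 0.5) == y_test))
```

The online-targets variant differs in that the pseudo-labels are refreshed from the model being trained rather than fixed up front; swapping `y_pseudo` for per-epoch predictions would sketch that instead.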
The authors also demonstrate that their method can be applied to uncurated data obtained from simple web queries.

This approach improves the state of the art on CIFAR-10 by 4% against the strongest known attack. These findings open a new avenue for improving adversarial robustness using unlabelled data.

Key Takeaways

This work:

- Addresses the more realistic case where unlabelled data is also uncurated, thereby opening a new avenue for improving adversarial training.
- Posits that unlabelled data can be a competitive alternative to labelled data for training adversarially robust models.
- Shows, theoretically, that in a simple statistical setting the sample complexity for learning an adversarially robust model from unlabelled data matches the fully supervised case.

Since increasing robustness against one distortion type can decrease robustness against others, measuring performance across different distortions is important to avoid overfitting to a specific type, especially when a defence is built with adversarial training, which is proving crucial for the future of machine learning reliability.