
Top 15 Papers From Google AI Research Accepted By NeurIPS 2020

For this year’s annual Conference on Neural Information Processing Systems, NeurIPS 2020, research paper submissions rose by 38% compared to last year. A total of 1,903 papers were accepted, up from 1,428 last year.

This year, the committee accepted more than 40 research papers submitted by Google researchers. Below, we have listed the top fifteen AI research papers, in no particular order, from Google AI Research that were accepted at the NeurIPS 2020 conference.

FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence

About: In this paper, the researchers at Google demonstrated the power of a simple combination of two common semi-supervised learning (SSL) methods: consistency regularisation and pseudo-labelling.

The FixMatch algorithm generates pseudo-labels using the model’s predictions on weakly-augmented unlabelled images; the model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. FixMatch achieves state-of-the-art performance across a number of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10.
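
To make the mechanism concrete, below is a minimal PyTorch-style sketch of FixMatch’s loss on unlabelled images; the names `model`, `weak_augment` and `strong_augment` are illustrative placeholders, and the 0.95 confidence threshold is the value commonly quoted for FixMatch.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, unlabeled_images, threshold=0.95):
    """Illustrative FixMatch loss on a batch of unlabelled images.

    `model`, `weak_augment` and `strong_augment` are assumed to be
    defined elsewhere; this only sketches the pseudo-labelling idea.
    """
    with torch.no_grad():
        # Predict on a weakly augmented view and keep only confident
        # predictions as hard pseudo-labels.
        weak_logits = model(weak_augment(unlabeled_images))
        probs = torch.softmax(weak_logits, dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        mask = (confidence >= threshold).float()

    # Train the model to predict those pseudo-labels from a strongly
    # augmented view of the same images.
    strong_logits = model(strong_augment(unlabeled_images))
    per_example = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```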

Read the paper here.

Supervised Contrastive Learning

About: In this paper, the researchers proposed a novel training methodology that consistently outperforms cross-entropy on supervised learning tasks across different architectures and data augmentations. They modified the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting. On both ResNet-50 and ResNet-200, the methodology outperformed cross-entropy by over 1%, setting a new state of the art of 78.8% among methods that use AutoAugment data augmentation.
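
As a rough illustration of the supervised contrastive idea, the sketch below treats every other sample sharing the anchor’s label as a positive; it assumes L2-normalised embeddings and integer class labels, and all names are illustrative rather than the paper’s exact formulation.

```python
import torch

def supcon_loss(features, labels, temperature=0.1):
    """Illustrative supervised contrastive loss for one batch.

    `features` are assumed to be L2-normalised embeddings of shape
    (batch, dim); every other sample sharing an anchor's label is
    treated as a positive for that anchor.
    """
    sim = features @ features.T / temperature                # pairwise similarities
    self_mask = torch.eye(len(features), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))          # drop self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)          # avoid -inf * 0 below

    # Positives: same label, different sample.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Average negative log-probability over each anchor's positives.
    return (-(log_prob * pos_mask.float()).sum(dim=1) / pos_counts).mean()
```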

Read the paper here.

Unsupervised Data Augmentation for Consistency Training

About: In this work, the researchers investigated the role of noise injection in consistency training. They substituted traditional noise injection methods with high-quality data augmentation methods in order to improve consistency training. To emphasise the use of better data augmentation in consistency training, the researchers named the method Unsupervised Data Augmentation, or UDA.
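
A minimal sketch of such a consistency term follows, assuming a hypothetical `augment` function (e.g. RandAugment for images or back-translation for text) and a classifier `model`; the sharpening temperature is illustrative.

```python
import torch
import torch.nn.functional as F

def uda_consistency_loss(model, unlabeled_batch, temperature=0.4):
    """Illustrative UDA-style consistency term on unlabelled data.

    `model` and `augment` are assumed to exist elsewhere; this only
    sketches the idea of matching predictions across augmentations.
    """
    with torch.no_grad():
        # Sharpened prediction on the original example serves as the target.
        target = torch.softmax(model(unlabeled_batch) / temperature, dim=-1)
    # The prediction on the augmented example is pushed towards that target.
    aug_log_probs = F.log_softmax(model(augment(unlabeled_batch)), dim=-1)
    return F.kl_div(aug_log_probs, target, reduction="batchmean")
```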

Read the paper here.

What Makes for Good Views for Contrastive Representation Learning?

About: Contrastive learning between multiple views of the data has recently achieved state-of-the-art performance in the field of self-supervised representation learning. In this paper, the researchers used empirical analysis to better understand the importance of view selection, arguing that good views should reduce the mutual information (MI) between views while keeping task-relevant information intact.

They also considered data augmentation as a way to reduce MI and showed that increasing data augmentation indeed leads to decreasing MI and improves downstream classification accuracy.  
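
The two-view contrastive setup the paper analyses can be sketched with a standard InfoNCE-style objective; `view1` and `view2` are assumed to be normalised embeddings of two augmented views of the same batch, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def two_view_infonce(view1, view2, temperature=0.1):
    """Illustrative two-view contrastive (InfoNCE) objective.

    `view1` and `view2` are assumed to be L2-normalised embeddings of two
    augmentations of the same batch of images; stronger augmentation
    lowers the mutual information between the views.
    """
    logits = view1 @ view2.T / temperature
    # The matching pair (i, i) is the positive; all other pairs are negatives.
    targets = torch.arange(len(view1), device=view1.device)
    return F.cross_entropy(logits, targets)
```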

Read the paper here.

Your GAN is Secretly an Energy-based Model, and You Should Use Discriminator Driven Latent Sampling

About: In this paper, the researchers showed that the discriminator of a Generative Adversarial Network (GAN) can enable better modelling of the data distribution through Discriminator Driven Latent Sampling (DDLS). The motivation is that learning a generative model, which has to make a structured prediction, is usually harder than learning a classifier, so the trained discriminator retains useful information that can be exploited at sampling time.
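
At a high level, the latent prior together with the discriminator’s logit on generated samples defines an energy in latent space, and Langevin-style updates refine the latent code before decoding. A rough, illustrative sketch (assuming a `generator` and a logit-returning `discriminator`, with illustrative hyperparameters) follows.

```python
import torch

def ddls_sample(generator, discriminator, z_dim=128, steps=100, step_size=0.01):
    """Illustrative latent-space Langevin sampling in the spirit of DDLS.

    `generator` and `discriminator` (returning a logit) are assumed given;
    this is a sketch, not the paper's exact procedure.
    """
    z = torch.randn(1, z_dim, requires_grad=True)
    for _ in range(steps):
        # Energy: negative log of a standard-normal prior minus the logit.
        energy = 0.5 * (z ** 2).sum() - discriminator(generator(z)).sum()
        grad, = torch.autograd.grad(energy, z)
        with torch.no_grad():
            # Langevin update: gradient step plus injected Gaussian noise.
            z -= 0.5 * step_size * grad
            z += (step_size ** 0.5) * torch.randn_like(z)
    return generator(z).detach()
```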

Read the paper here.

Munchausen Reinforcement Learning

About: In this work, the researchers presented Munchausen RL, a simple extension to RL algorithms. The method augments the immediate rewards with the scaled logarithm of the policy computed by the RL agent. The core contribution of this work is a new RL algorithm that surpasses state-of-the-art results on a challenging discrete-action benchmark.
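
In its M-DQN form, the augmentation amounts to adding a scaled, clipped log-probability of the taken action under the softmax policy implied by the Q-values. A hedged sketch, with illustrative hyperparameter values:

```python
import torch
import torch.nn.functional as F

def munchausen_reward(reward, q_values, action, alpha=0.9, tau=0.03, clip=-1.0):
    """Illustrative Munchausen reward augmentation (M-DQN style).

    The immediate reward is augmented with the scaled log-probability that
    the softmax policy derived from the Q-values assigns to the action that
    was actually taken; hyperparameter values here are illustrative.
    """
    # Softmax policy implied by the Q-values at temperature tau.
    log_policy = F.log_softmax(q_values / tau, dim=-1)
    # Log-probability of the chosen action, clipped for numerical stability.
    log_pi_a = log_policy.gather(-1, action.unsqueeze(-1)).squeeze(-1)
    return reward + alpha * torch.clamp(tau * log_pi_a, min=clip)
```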

Read the paper here.

What Do Neural Networks Learn When Trained With Random Labels?

About: In this paper, the researchers showed analytically that, for convolutional and fully-connected networks, training with random labels produces an alignment between the principal components of the network parameters and those of the data.

They further studied the alignment effect by investigating neural networks that are pre-trained on randomly labelled image data and subsequently fine-tuned on disjoint datasets with random or real labels.

Read the paper here.

What is being transferred in transfer learning?

About: Despite the wide adoption of transfer learning in various deep learning applications, it is hard to understand what enables a successful transfer and which parts of the network are responsible for it. In this paper, the researchers provided new tools and analyses to address these fundamental questions.

Read the paper here.

Big Bird: Transformers for Longer Sequences

About: Transformer-based models, such as BERT, have been among the most successful deep learning models for NLP, but their full attention mechanism scales quadratically with sequence length. In this paper, the researchers proposed BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear.

They showed that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full-attention model.
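
The sparse pattern can be pictured as a boolean attention mask that combines a local sliding window, a few global tokens, and a few random connections per token; the sketch below builds such a mask with illustrative sizes and is not the paper’s exact block implementation.

```python
import numpy as np

def bigbird_mask(seq_len, window=3, num_global=2, num_random=2, seed=0):
    """Illustrative BigBird-style sparse attention mask.

    Each token attends to a local window, a few global tokens, and a few
    random tokens, so the number of attended positions per token stays
    roughly constant instead of growing with sequence length.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        # Local sliding window around position i.
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
        # A handful of random long-range connections.
        mask[i, rng.choice(seq_len, size=num_random, replace=False)] = True
    # Global tokens attend to, and are attended by, every position.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    return mask
```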

Read the paper here.

Finite Versus Infinite Neural Networks: an Empirical Study

About: In this paper, the researchers performed an empirical study of the correspondence between wide neural networks and kernel methods. By doing so, they resolved a variety of open questions related to the study of infinitely wide neural networks.

Their experimental findings include that kernel methods outperform fully-connected finite-width networks but underperform convolutional finite-width networks, among other results.

Read the paper here.

Interpretable Sequence Learning for Covid-19 Forecasting

About: In this research, the researchers proposed an approach that integrates machine learning into compartmental disease modelling in order to predict the progression of COVID-19.

According to them, the model explicitly depicts how the different compartments evolve, and it uses interpretable encoders to incorporate covariates and improve performance.

Read the paper here.

On the Training Dynamics of Deep Networks with L2 Regularisation

About: In this paper, the researchers studied the role of L2 regularisation in deep learning and uncovered simple relations between the performance of the model, the L2 coefficient, the learning rate, and the number of training steps. These empirical relations hold when the network is over-parameterised, and they can be used to predict the optimal regularisation parameter of a given model.

Read the paper here.

Leverage the Average: An Analysis of KL Regularisation in Reinforcement Learning

About: In this paper, the researchers provided an explanation of the effect of regularisation in RL. The study is conducted through the lens of regularised Approximate Dynamic Programming (ADP), a framework that encompasses a number of recent and successful approaches making use of regularisation.

Read the paper here.

Sliding Window Algorithms for k-Clustering Problems

About: In this paper, the researchers presented the first algorithms for the k-clustering problem on sliding windows with space linear in k. The sliding window model of computation captures scenarios in which data is arriving continuously. The goal of this project is to design algorithms that update the solution efficiently with each arrival rather than recomputing it from scratch.

Read the paper here.

Rethinking Pre-training and Self-training

About: In this research, the researchers investigated self-training as another method to utilise additional data on the same setup and contrasted it with ImageNet pre-training. The study revealed the generality and flexibility of self-training, along with three additional insights; for example, stronger data augmentation and more labelled data further diminish the value of pre-training.

Read the paper here.
