Outstanding Papers Awarded At NeurIPS 2019

The Neural Information Processing Systems (NeurIPS) conference is held every December. This year, the 33rd edition was held in Vancouver, Canada. The conference exists to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects.

The conference also announced this year's NeurIPS Outstanding Paper Awards, which recognise the most notable papers accepted at the conference. The committee selected the winners from the set of papers accepted for oral presentation, judging them on the following criteria:

  • Potential to endure
  • Insight
  • Creativity
  • Revolutionary potential
  • Rigour
  • Elegance
  • Reproducibility
  • Scientific merit

This year, the committee also introduced an additional award, the Outstanding New Directions Paper Award, to highlight work that distinguished itself by setting a novel avenue for future research.

In this article, we list the outstanding papers awarded at NeurIPS 2019.

1| Distribution-Independent PAC Learning of Halfspaces with Massart Noise

Category: Outstanding Paper Award

About: This paper studies distribution-independent PAC learning of halfspaces, or Linear Threshold Functions (LTFs), for binary classification in the presence of Massart noise. Its main contribution is the first non-trivial learning algorithm for the class of halfspaces (or even disjunctions) in the distribution-free PAC model with Massart noise.
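For intuition, here is a minimal simulation of the Massart noise model itself (illustrative only; this is not the paper's learning algorithm). Each label of a target halfspace is flipped independently with an adversarially chosen probability eta(x) that never exceeds a constant eta < 1/2:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, eta = 1000, 5, 0.2           # eta bounds every per-example flip rate

    w_true = rng.normal(size=d)        # the unknown target halfspace
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_true)            # clean labels sign(<w, x>)

    # Massart noise: an adversary picks a flip probability eta(x) <= eta for
    # each example; each label flips independently with that probability.
    flip_prob = rng.uniform(0, eta, size=n)
    y_noisy = np.where(rng.random(n) < flip_prob, -y, y)

The learner sees only (X, y_noisy) and must still find a halfspace with small misclassification error.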

Read the paper here.

2| Uniform Convergence May Be Unable To Explain Generalization In Deep Learning

Category: Outstanding New Directions Paper Award 

About: This paper presents examples of overparameterized linear classifiers and neural networks trained by gradient descent (GD) where uniform convergence provably cannot “explain generalisation.” The researchers frame the goal as a small generalisation bound that shows appropriate dependence on the sample size, width, depth, label noise, and batch size, and argue that uniform-convergence-based bounds cannot deliver it.
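For reference, a uniform convergence bound controls the train–test gap simultaneously over an entire hypothesis class; in LaTeX notation (ours, not the paper's exact statement):

    \sup_{h \in \mathcal{H}} \left| \mathrm{err}_{\mathcal{D}}(h) - \mathrm{err}_{S}(h) \right| \le \epsilon(|S|, \mathcal{H})

The paper's point is that even the tightest bound of this form, applied only to the set of hypotheses that GD actually outputs, can remain nearly vacuous on their examples even though the true test error is small.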

Read the paper here.

3| Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

Category: Honorable Mention Outstanding Paper Award

About: In this paper, the researchers study the problem of estimating a nonparametric probability density under a large family of losses called Besov IPMs. The paper shows that linear distribution estimates, such as the empirical distribution or kernel density estimators, often fail to converge at the optimal rate. Furthermore, the researchers show that GANs can strictly outperform the best linear estimator.
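For context, an integral probability metric (IPM) measures the distance between distributions p and q through the worst-case discriminator from a function class F; Besov IPMs take F to be a ball in a Besov space. In LaTeX notation (ours):

    d_{\mathcal{F}}(p, q) = \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X \sim p}[f(X)] - \mathbb{E}_{X \sim q}[f(X)] \right|

Varying F recovers familiar losses as special cases, e.g. the Wasserstein-1 distance when F is the class of 1-Lipschitz functions.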

Read the paper here.

4| Fast And Accurate Least-Mean-Squares Solvers

Category: Honorable Mention Outstanding Paper Award

About: Least-mean-squares (LMS) solvers such as Linear/Ridge/Lasso Regression, SVD and Elastic-Net not only solve fundamental machine learning problems but also serve as building blocks in a variety of other methods, such as decision trees and matrix factorisations. With this in mind, the researchers presented a novel framework that reduces the computational complexity of LMS solvers by one or two orders of magnitude, with no precision loss and improved numerical stability.
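The key observation is that LMS solutions depend on the data only through X^T X (and X^T y), and by Carathéodory's theorem the weighted mean of n points in R^D can be represented exactly by at most D + 1 of them. Below is a minimal sketch of that reduction (a plain, unaccelerated version; the paper's contribution is a much faster booster around this idea):

    import numpy as np

    def caratheodory(P, u, tol=1e-12):
        # Reduce (P, u), with P of shape (n, D) and positive weights u summing
        # to 1, to at most D + 1 points with the same weighted mean.
        P, u = P.copy(), u.copy()
        while len(P) > P.shape[1] + 1:
            D = P.shape[1]
            # Any D + 1 difference vectors in R^D are linearly dependent:
            A = (P[1:D + 2] - P[0]).T              # shape (D, D + 1)
            v = np.zeros(len(P))
            v[1:D + 2] = np.linalg.svd(A)[2][-1]   # null-space direction of A
            v[0] = -v[1:D + 2].sum()               # sum(v) = 0 and sum(v_i P_i) = 0
            pos = v > tol
            alpha = np.min(u[pos] / v[pos])
            u = u - alpha * v                      # stays >= 0, still sums to 1
            P, u = P[u > tol], u[u > tol]          # drop zeroed-out points
        return P, u

    # Toy check: a coreset of the outer products x x^T reproduces X^T X.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    lifted = np.einsum('ni,nj->nij', X, X).reshape(len(X), -1)  # each x x^T in R^9
    C, w = caratheodory(lifted, np.full(len(X), 1 / len(X)))
    print(len(C), np.allclose(len(X) * (w @ C).reshape(3, 3), X.T @ X))  # <= 10, True

Any LMS solver run on the small weighted coreset then returns the same solution as on the full data.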

Read the paper here.

5| Putting An End to End-to-End: Gradient-Isolated Learning Of Representations

Category: Honorable Mention Outstanding New Directions Paper Award

About: In this paper, the researchers propose a novel deep learning method for local self-supervised representation learning that requires neither labels nor end-to-end backpropagation, instead exploiting the natural order in data. The method splits a deep neural network into a stack of gradient-isolated modules, each trained to maximally preserve the information in its inputs using the InfoNCE bound.
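A minimal sketch of the InfoNCE objective each module optimises: given a batch of context vectors c_i and encodings z_i, the positive pair (c_i, z_i) is scored against the other z_j in the batch (the bilinear score and the shapes here are our illustrative assumptions, not the paper's exact architecture):

    import numpy as np

    def info_nce(z, c, W):
        # z: (n, d) encodings, c: (n, d) context vectors, W: (d, d) learned
        # bilinear map. For each c_i the positive is z_i; other z_j are negatives.
        scores = c @ W @ z.T                          # (n, n) compatibility scores
        scores -= scores.max(axis=1, keepdims=True)   # stabilise the softmax
        log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
        return -np.diag(log_probs).mean()             # mean -log p(positive | batch)

    rng = np.random.default_rng(0)
    n, d = 8, 16
    print(info_nce(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                   rng.normal(size=(d, d))))          # ~log(8) for random features

Because each module minimises this loss on its own activations, no gradient ever needs to flow between modules.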

Read the paper here.

6| Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations

Category: Honorable Mention Outstanding New Directions Paper Award

About: Researchers at Stanford University proposed Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. The potential of SRNs is demonstrated by evaluating them on novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
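At its core, an SRN is a learned continuous map Phi from world coordinates to a feature vector. A deliberately tiny, untrained sketch of that type signature (the real Phi is an MLP trained end-to-end through a differentiable ray-marching renderer):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = 0.1 * rng.normal(size=(3, 64)), np.zeros(64)
    W2, b2 = 0.1 * rng.normal(size=(64, 32)), np.zeros(32)

    def phi(xyz):
        # The scene as a continuous function: world coordinates (n, 3) ->
        # feature vectors (n, 32) describing local scene properties.
        h = np.maximum(xyz @ W1 + b1, 0.0)            # one ReLU hidden layer
        return h @ W2 + b2

    print(phi(rng.uniform(-1, 1, size=(5, 3))).shape)  # (5, 32)

Because Phi is defined at every point in space rather than on a discrete grid, the representation is continuous in the world coordinates.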

Read the paper here.

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
