
Which Are The Top Winning Papers At NeurIPS 2020

NeurIPS 2020 Winners

The NeurIPS 2020 annual conference has finally reached its conclusion with the announcement of this year's best paper awards. The 34th edition of the prestigious conference declared three winners: OpenAI's Language Models Are Few-Shot Learners, the paper behind GPT-3 and arguably the most talked-about innovation of the year; No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium by researchers from Politecnico di Milano and Carnegie Mellon University; and Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method by the University of California, Berkeley.

The number of papers accepted, 1,903, was about 33% higher than 2019's 1,428. From Google alone, about 40 research papers were accepted.

The NeurIPS 2020 best paper awards were selected by a jury comprising Nicolò Cesa-Bianchi, professor of computer science; Jennifer Dy, professor in the Department of Electrical & Computer Engineering at Northeastern University; Surya Ganguli; Masashi Sugiyama; and Laurens van der Maaten, research director at FAIR. In this article, we look at the winning papers and the factors that set them apart from the competition.

Language Models Are Few-Shot Learners

Authored by over 30 researchers from OpenAI, this paper demonstrated how the autoregressive language model GPT-3 was trained with 175 billion parameters, 10x more than any previous non-sparse language model. The model was tested in the few-shot setting, where it showed strong performance on several NLP datasets, including translation, question answering, and cloze tasks. The paper also noted the model's shortcomings, including datasets on which it still struggled.
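The "few-shot" setting means the model is shown a handful of worked examples inside its prompt and must complete the final one, with no gradient updates. A minimal sketch of how such a prompt is assembled; the examples and the English-to-French format are our own illustration, not taken from the paper:

```python
# Sketch of few-shot prompting: demonstrations are concatenated into the
# prompt, followed by an unanswered query for the model to complete.

def build_few_shot_prompt(examples, query):
    """Join (input, output) demonstrations, then append the open query."""
    lines = [f"English: {src}\nFrench: {tgt}" for src, tgt in examples]
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

demos = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
prompt = build_few_shot_prompt(demos, "peppermint")
print(prompt)
```

The model's continuation of the trailing "French:" line is taken as its answer; no parameter is updated at any point.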

As per the jury at NeurIPS, this study effectively showed that when language models are scaled to an unprecedented number of parameters, they can achieve competitive performance on many natural language processing problems without any additional task-specific training. They also noted that the paper presented a 'very extensive and thoughtful exposition' of the broader impact of the work, which may influence the NeurIPS community to take cognisance of the real-world impact of research. "This is a very surprising result that is expected to have a substantial impact in the field, and that is likely to withstand the test of time," the jury added.

The paper can be downloaded from here.

No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium

Authored by researchers from Politecnico di Milano and Carnegie Mellon University, this study demonstrates the existence of 'regret-minimising algorithms' that converge to correlated equilibrium. For the uninitiated, correlated equilibrium is a game-theoretic solution concept that generalises the better-known Nash equilibrium: each player chooses their action based on their own observation of the value of the same public signal.
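To make the concept concrete, here is a small sketch using the classic game of Chicken with textbook payoffs (our own illustrative choice, not from the paper). It checks whether a distribution over recommended joint actions is a correlated equilibrium, i.e. whether any player could gain by deviating from a recommendation:

```python
# Payoffs for the game of Chicken. Actions: 0 = Dare, 1 = Swerve.
# payoff[(a0, a1)] = (player 0's payoff, player 1's payoff).
payoff = {
    (0, 0): (0, 0), (0, 1): (7, 2),
    (1, 0): (2, 7), (1, 1): (6, 6),
}

# Candidate correlated equilibrium: a mediator draws one joint profile from
# this distribution and privately tells each player their own component.
dist = {(0, 1): 1/3, (1, 0): 1/3, (1, 1): 1/3}

def is_correlated_equilibrium(dist, payoff, actions=(0, 1)):
    """True if no player can gain by deviating from any recommendation."""
    for player in (0, 1):
        for rec in actions:          # the action this player was told to play
            for dev in actions:      # a candidate deviation
                gain = 0.0
                for profile, p in dist.items():
                    if profile[player] != rec:
                        continue     # recommendation differs; irrelevant here
                    deviated = list(profile)
                    deviated[player] = dev
                    gain += p * (payoff[tuple(deviated)][player]
                                 - payoff[profile][player])
                if gain > 1e-12:     # profitable deviation found
                    return False
    return True

print(is_correlated_equilibrium(dist, payoff))  # True
```

Note that this distribution cannot be produced by players randomising independently, which is exactly what makes the set of correlated equilibria larger than the set of Nash equilibria.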

This study is touted to solve a long-standing open problem spanning game theory, computer science, and economics. One possible use case is efficient traffic routing through navigation maps.

The jury at NeurIPS observed that finding automated procedures for establishing equilibria, as shown by this study, is no mean feat, and that this is the first learning-based approach for finding correlated equilibria in general interactions.


Read the full paper here.


Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method

This study by researchers from the University of California, Berkeley develops techniques that exploit the spectral properties of the data matrix to obtain improved approximation guarantees for column subset selection and the Nyström method, going beyond standard worst-case analysis.
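As a rough sketch of the Nyström method the paper analyses: a large positive semi-definite matrix is approximated from a small subset of its columns. The uniform column sampling and the exactly low-rank matrix below are our own illustrative simplifications; in this special case the reconstruction is exact up to floating point, and the paper's contribution is sharper, spectrum-dependent guarantees for the general case:

```python
import numpy as np

# Nyström sketch: approximate a PSD matrix K from m of its n columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
K = X @ X.T                      # 200 x 200 PSD matrix of rank 5

m = 40
idx = rng.choice(200, size=m, replace=False)
C = K[:, idx]                    # sampled columns (n x m)
W = K[np.ix_(idx, idx)]          # intersection block (m x m)

# Nyström approximation: K ≈ C W^+ C^T, using the pseudoinverse of W.
K_hat = C @ np.linalg.pinv(W) @ C.T

err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print(f"relative Frobenius error: {err:.2e}")
```

Because rank(K) = 5 ≤ m here, the error is at machine precision; for matrices with slowly decaying spectra, how well such column subsets can do is precisely what the paper's guarantees characterise.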

The NeurIPS judges noted that since the approximation techniques are widely adopted in machine learning, this study may have a substantial impact on kernel methods, feature selection, and double-descent behaviour of neural networks.

The full paper can be found here.

Test Of Time Award

In keeping with the tradition of NeurIPS, the jury also announced the Test of Time award. This recognition is bestowed upon a paper from a past NeurIPS conference that went on to make significant contributions to the field.
This year, this was awarded to the paper titled — HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, published in NeurIPS 2011.
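The HOGWILD! idea can be sketched as follows: several workers run stochastic gradient descent on shared parameters with no locking at all, tolerating occasionally stale reads and racy writes. The toy least-squares problem below is our own illustration; note that Python's GIL serialises much of the work, so this only demonstrates the lock-free access pattern, not the speed-ups the paper reports for sparse problems on real parallel hardware:

```python
import threading
import numpy as np

# Toy lock-free parallel SGD: workers update the shared vector w with no
# mutex, in the spirit of HOGWILD!. Noiseless least squares, made-up data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(3000, 3))
y = X @ true_w

w = np.zeros(3)                  # shared parameters, updated without locks

def worker(rows, lr=0.01, epochs=5):
    for _ in range(epochs):
        for i in rows:
            grad = (X[i] @ w - y[i]) * X[i]   # possibly stale read of w
            w[:] = w - lr * grad              # racy in-place write

# Four workers, each owning an interleaved quarter of the rows.
threads = [threading.Thread(target=worker, args=(range(k, 3000, 4),))
           for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(np.round(w, 2))            # close to true_w despite the races
```

The paper's insight is that when updates are sparse, such unsynchronised writes rarely collide, so convergence is barely affected while the cost of locking disappears.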


Copyright Analytics India Magazine Pvt Ltd
