Best ML Papers Presented At Major Conferences Of 2021


The year is about to end, and all the major AI and ML conferences of 2021 took place online. The conferences set a high standard, showcasing important papers from researchers across the globe. We have covered all the major conferences throughout the year, and picking the best papers is no cakewalk: some are well known to domain experts, while others are more of hidden gems.

Here we present some of the best papers presented at major conferences of 2021:

1| Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies at ICML 2021

By: The paper by Paul Vicol, Jascha Sohl-Dickstein, and Luke Metz from Google Brain and the University of Toronto won the outstanding paper award.

About: The paper introduced Persistent Evolution Strategies (PES), a method for unbiased gradient estimation in unrolled computation graphs. PES allows for rapid parameter updates, has low memory usage, is unbiased, and has reasonable variance characteristics. The researchers experimentally demonstrate the advantages of PES over several other gradient estimation methods on synthetic tasks and show its applicability to training learned optimisers and tuning hyperparameters.
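
The core mechanics can be illustrated with a brief, hypothetical sketch: each particle keeps the perturbations it has accumulated across truncated unrolls, and it is these accumulated perturbations, rather than only the most recent ones, that are correlated with the loss. The toy inner problem, hyperparameters and variable names below are invented for illustration and are not the authors' code.

```python
# Illustrative sketch of the PES idea in plain NumPy (toy problem, made-up settings).
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1          # perturbation scale
n_pairs = 32         # antithetic perturbation pairs
K = 10               # truncation (unroll window) length
T = 100              # total inner steps
theta = np.array([0.05])   # outer parameter (here: an inner learning rate)

def inner_step(state, theta):
    """One step of a toy inner problem: gradient descent on f(x) = x^2."""
    x = state - theta[0] * 2.0 * state
    return x, x ** 2   # new state, per-step loss

states = np.full(2 * n_pairs, 5.0)          # one particle per perturbation
xi = np.zeros((2 * n_pairs, theta.size))    # accumulated perturbations

for t0 in range(0, T, K):
    eps = rng.normal(0.0, sigma, size=(n_pairs, theta.size))
    eps = np.concatenate([eps, -eps])       # antithetic sampling
    xi += eps                               # perturbations persist across windows
    window_loss = np.zeros(2 * n_pairs)
    for i in range(2 * n_pairs):
        for _ in range(K):
            states[i], loss = inner_step(states[i], theta + eps[i])
            window_loss[i] += loss
    # PES-style estimator: correlate *accumulated* perturbations with the window loss.
    grad_est = (xi * window_loss[:, None]).mean(axis=0) / sigma ** 2
    theta = theta - 1e-3 * grad_est         # outer update
```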

2| Oops I Took A Gradient: Scalable Sampling for Discrete Distributions at ICML 2021

By: The paper by researchers from Google Brain, including Will Grathwohl, Milad Hashemi, Kevin Swersky, David Duvenaud and Chris J. Maddison, has received the outstanding paper honourable mention at the conference.

About: The researchers proposed a general and scalable approximate sampling strategy for probabilistic models with discrete variables. The approach uses gradients of the likelihood function with respect to its discrete inputs to propose updates in a Metropolis-Hastings sampler.
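
A minimal sketch of the idea, assuming a toy binary model with a differentiable log-probability (the energy function and hyperparameters are invented; this is not the authors' implementation): the gradient gives a first-order estimate of how much flipping each bit would change the log-probability, and that estimate drives the proposal distribution.

```python
import torch

torch.manual_seed(0)
D = 20
W = torch.randn(D, D) * 0.1
W = (W + W.T) / 2                     # symmetric couplings for a toy Ising-style model

def log_prob(x):
    return torch.einsum('i,ij,j->', x, W, x)

def gradient_mh_step(x):
    x = x.clone().requires_grad_(True)
    lp = log_prob(x)
    grad, = torch.autograd.grad(lp, x)
    delta = -(2 * x - 1) * grad       # first-order change in log-prob from flipping each bit
    q_fwd = torch.softmax(delta / 2, dim=0)
    i = torch.multinomial(q_fwd, 1)   # propose which coordinate to flip

    x_new = x.detach().clone()
    x_new[i] = 1 - x_new[i]

    x_new_ = x_new.clone().requires_grad_(True)
    lp_new = log_prob(x_new_)
    grad_new, = torch.autograd.grad(lp_new, x_new_)
    delta_new = -(2 * x_new_ - 1) * grad_new
    q_rev = torch.softmax(delta_new / 2, dim=0)

    # Metropolis-Hastings acceptance with the asymmetric proposal.
    log_alpha = lp_new - lp + torch.log(q_rev[i]) - torch.log(q_fwd[i])
    if torch.rand(1).log() < log_alpha:
        return x_new
    return x.detach()

x = torch.randint(0, 2, (D,)).float()
for _ in range(100):
    x = gradient_mh_step(x)
```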

3| OpenGAN: Open-Set Recognition via Open Data Generation at ICCV 2021

By: Researchers Shu Kong and Deva Ramanan from Carnegie Mellon University developed OpenGAN for open-set recognition. The researchers combined two technical insights:

  • They trained the classifier on off-the-shelf (OTS) features rather than on pixels, and
  • They adversarially synthesised fake open data to enlarge the pool of open-set training examples.

About: With OpenGAN, the team shows that a GAN discriminator achieves state-of-the-art open-set discrimination once it is model-selected on a validation set of real outlier examples. This holds even when the outlier validation examples are sparsely sampled or strongly biased. OpenGAN significantly outperforms prior art on both open-set image recognition and semantic segmentation.
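
A minimal, hypothetical sketch of the recipe, assuming 512-dimensional OTS features and invented architectures and hyperparameters (not the paper's code): a GAN is trained in feature space, and the discriminator's output is read as an open-set score at test time.

```python
import torch
import torch.nn as nn

feat_dim, noise_dim = 512, 64
D = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(closed_feats):
    """closed_feats: OTS features of closed-set (in-distribution) images."""
    n = closed_feats.size(0)
    fake_feats = G(torch.randn(n, noise_dim))
    # Discriminator: closed-set features are "real"; generated features
    # play the role of synthesised open-set data.
    d_loss = bce(D(closed_feats), torch.ones(n, 1)) + \
             bce(D(fake_feats.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(fake_feats), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# At test time, D(feature) near 1 suggests a closed-set example, near 0 open-set.
# The discriminator checkpoint is selected on a small validation set of real outliers.
```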

4| Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields at ICCV 2021

By: The paper by researchers Jonathan T. Barron, Peter Hedman, Pratul P. Srinivasan, Ricardo Martin-Brualla and Ben Mildenhall from Google, and Matthew Tancik from UC Berkeley, introduced a solution they call “mip-NeRF.”

About: The team presented mip-NeRF, a multiscale model based on neural radiance fields (NeRF) that addresses NeRF's inherent aliasing. NeRF works by casting rays, encoding the positions of points sampled along those rays, and training separate neural networks at distinct scales. In contrast, mip-NeRF casts cones, encodes the positions and sizes of conical frustums, and trains a single neural network that models the scene at multiple scales.
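
The piece that makes this possible is the integrated positional encoding, which encodes a Gaussian approximation of each conical frustum rather than a single point. The sketch below is illustrative, with invented shapes and frequency counts, and is not the authors' implementation.

```python
import torch

def integrated_pos_enc(mean, var, num_freqs=16):
    """mean, var: [..., 3] Gaussian statistics approximating a conical frustum segment."""
    scales = 2.0 ** torch.arange(num_freqs, dtype=torch.float32)  # frequencies 2^0 .. 2^(L-1)
    scaled_mean = mean[..., None, :] * scales[:, None]            # [..., L, 3]
    scaled_var = var[..., None, :] * scales[:, None] ** 2
    # Expected sin/cos of a Gaussian: E[sin(x)] = sin(mu) * exp(-var / 2), likewise for cos.
    damp = torch.exp(-0.5 * scaled_var)
    enc = torch.cat([torch.sin(scaled_mean) * damp,
                     torch.cos(scaled_mean) * damp], dim=-1)
    return enc.flatten(start_dim=-2)                              # [..., L * 6]

# Larger variance (a wider frustum, e.g. a coarser or more distant sample) damps the
# high-frequency terms, which is what suppresses the aliasing.
mu = torch.zeros(4, 3)            # hypothetical frustum means
sigma2 = torch.full((4, 3), 0.1)  # hypothetical diagonal variances
features = integrated_pos_enc(mu, sigma2)
```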

5| Visually Grounded Reasoning across Languages and Cultures at EMNLP 2021

By: Researchers from the University of Cambridge, University of Copenhagen, McGill University and Mila – Quebec Artificial Intelligence Institute presented the paper and bagged the best long paper award.

About: The team created a multilingual dataset for Multicultural Reasoning over Vision and Language (MaRVL) by eliciting statements from native-speaker annotators about pairs of images. The task is to determine whether each grounded statement is true or false. The researchers established a series of baselines using state-of-the-art models and found that cross-lingual transfer performance lags dramatically behind supervised performance in English.

6| CHoRaL: Collecting Humor Reaction Labels from Millions of Social Media Users at EMNLP 2021

By: The paper presented by Zixiaofan Yang, Shayan Hooshmand, and Julia Hirschberg from the Department of Computer Science at Columbia University won the best short paper award.

About: CHoRaL is a framework for generating perceived-humour labels on Facebook posts using the naturally available user reactions to those posts, with no manual annotation needed. It provides both binary labels and continuous humour and non-humour scores. The team presented the largest humour-labelled dataset to date, covering 785K posts related to COVID-19. CHoRaL enables the development of large-scale humour detection models on any topic and opens a new path to the study of humour on social media.
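
The general idea of turning reaction counts into labels can be illustrated with a deliberately simplified sketch; the scoring function, smoothing, threshold and field names below are invented for illustration and differ from the exact formulation in the paper.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: dict   # e.g. {"haha": 120, "like": 800, "sad": 3, ...}

def humour_score(post: Post, smoothing: float = 10.0) -> float:
    """Continuous score: smoothed share of 'haha' reactions on the post (illustrative)."""
    total = sum(post.reactions.values())
    haha = post.reactions.get("haha", 0)
    return haha / (total + smoothing)

def humour_label(post: Post, threshold: float = 0.2) -> int:
    """Binary label obtained by thresholding the continuous score (illustrative)."""
    return int(humour_score(post) >= threshold)

post = Post("hypothetical COVID-19 post", {"haha": 150, "like": 300, "sad": 5})
print(humour_score(post), humour_label(post))
```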

7| Meta Pseudo Labels at CVPR 2021

By: Researchers from the Google AI Brain team, including Hieu Pham, Qizhe Xie, Zihang Dai, Minh-Thang Luong and Quoc V. Le, introduced this semi-supervised learning technique.

About: The model presented in the paper achieved a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, 1.6 per cent better than the previous state of the art. The key idea is that the teacher learns from the student's feedback in order to generate pseudo labels that best help the student's learning. The learning process in Meta Pseudo Labels consists of two main updates: updating the student based on the pseudo-labelled data produced by the teacher, and updating the teacher based on the student's performance.
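
A simplified, hypothetical sketch of those two alternating updates (the authors' implementation uses a more careful gradient approximation along with additional training signals; the models, data and feedback scalar h below are illustrative):

```python
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(32, 10)
student = torch.nn.Linear(32, 10)
opt_t = torch.optim.SGD(teacher.parameters(), lr=0.1)
opt_s = torch.optim.SGD(student.parameters(), lr=0.1)

def mpl_step(x_unlab, x_lab, y_lab):
    # 1) Teacher produces (hard) pseudo labels on unlabelled data.
    with torch.no_grad():
        pseudo = teacher(x_unlab).argmax(dim=-1)

    # Student's labelled loss *before* it learns from the pseudo labels.
    loss_before = F.cross_entropy(student(x_lab), y_lab).item()

    # 2) Student update on the pseudo-labelled batch.
    opt_s.zero_grad()
    F.cross_entropy(student(x_unlab), pseudo).backward()
    opt_s.step()

    # Student's labelled loss *after* the update: the teacher's feedback signal.
    loss_after = F.cross_entropy(student(x_lab), y_lab).item()
    h = loss_before - loss_after   # > 0 means the pseudo labels helped the student

    # 3) Teacher update: reinforce its pseudo labels in proportion to how much they helped.
    opt_t.zero_grad()
    (h * F.cross_entropy(teacher(x_unlab), pseudo)).backward()
    opt_t.step()

x_u, x_l, y_l = torch.randn(16, 32), torch.randn(8, 32), torch.randint(0, 10, (8,))
mpl_step(x_u, x_l, y_l)
```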

8| RRL: ResNet as Representation for Reinforcement Learning at ICML 2021

By: The paper was presented by researchers Rutav Shah (Indian Institute of Technology, Kharagpur) and Vikash Kumar, and was produced in collaboration with the University of Washington and Facebook AI.

About: The team proposed a straightforward and effective approach capable of learning complex behaviours directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained ResNet into the standard RL pipeline and delivers results comparable to learning directly from the state.
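
A hedged sketch of the recipe, assuming a frozen ImageNet-pre-trained ResNet-34 from torchvision and invented observation and action sizes (ImageNet normalisation omitted for brevity; this is not the authors' code):

```python
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
resnet.fc = nn.Identity()          # keep the 512-d penultimate features
resnet.eval()
for p in resnet.parameters():      # the encoder is never trained by RL
    p.requires_grad_(False)

policy = nn.Sequential(            # standard MLP policy over the features
    nn.Linear(512 + 24, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 8),             # hypothetical 8-dimensional action space
)

def act(image, proprio):
    """image: [3, 224, 224] camera observation; proprio: [24] joint state (illustrative)."""
    with torch.no_grad():
        feat = resnet(image.unsqueeze(0)).squeeze(0)
    obs = torch.cat([feat, proprio])
    return policy(obs)

action = act(torch.rand(3, 224, 224), torch.rand(24))
```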
