Interesting Research Papers Presented By Meta AI At NeurIPS 2021

The article explores the top ten featured research publications by Meta AI at NeurIPS 2021

Meta AI researchers will present a total of 83 papers at NeurIPS 2021, the Thirty-fifth Conference on Neural Information Processing Systems, held from December 6 to 14. The virtual conference boasts 2,334 papers, 60 workshops, eight keynote speakers, and 15k+ attendees. In 2020, Meta AI (previously Facebook) presented 48 research papers at NeurIPS 2020. This article highlights the top 10 featured research publications by Meta AI at NeurIPS 2021.

1. Unsupervised Speech Recognition

Authors: Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, Michael Auli


The research paper highlights the limitations of training ML models on labelled data. In its place, the authors put forth wav2vec-U, short for wav2vec Unsupervised, a method to train speech recognition models without any labelled data. To do so, they leverage self-supervised speech representations to segment unlabelled audio and learn a mapping from these representations to phonemes via adversarial training.
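The segmentation step can be sketched with plain k-means over frame-level features. Everything below is an illustrative assumption (a toy k-means with deterministic initialisation), and the adversarial mapping to phonemes trained on top is not shown:

```python
import numpy as np

def segment_and_pool(features, k=4, iters=10):
    """Toy version of the wav2vec-U segmentation step: cluster frame-level
    self-supervised features, cut a segment wherever the cluster id changes,
    and mean-pool the frames inside each segment."""
    # deterministic init: spread the initial centers across the sequence
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    for _ in range(iters):
        ids = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (ids == c).any():
                centers[c] = features[ids == c].mean(0)
    # segment boundaries fall where consecutive frames change cluster
    cuts = [0] + [i for i in range(1, len(ids)) if ids[i] != ids[i - 1]] + [len(ids)]
    return np.stack([features[a:b].mean(0) for a, b in zip(cuts[:-1], cuts[1:])])
```

Pooled segment representations like these are what the generator then maps to phoneme sequences for the adversarial game.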

Find the research paper here.

2. Bandits with Knapsacks beyond the Worst-Case Analysis

Authors: Karthik Abinav Sankararaman, Aleksandrs Slivkins

In this research, the authors present three original results that go beyond the worst-case analysis of Bandits with Knapsacks (BwK). The results build on the BwK algorithm of Agrawal and Devanur (2014), providing new analyses thereof. First, the authors provide upper and lower bounds for a complete characterisation of logarithmic regret rates. Second, they consider the "simple regret" in BwK, which tracks algorithmic performance in a given round, and prove that it is small in all but a few rounds. Finally, they provide a general "reduction" from BwK to bandits that leverages helpful structure, and apply this reduction to semi-bandits.
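As a toy illustration of the setting (not the paper's algorithm or its analysis), a BwK run can be simulated with a UCB index on an optimistic reward-per-unit-cost ratio, stopping when a single resource budget is spent; all constants below are made up for the sketch:

```python
import numpy as np

def bwk_ucb(means, costs, budget, horizon, seed=0):
    """Simulate Bandits with Knapsacks: Bernoulli rewards and resource costs,
    play ends when the budget is exhausted (or the horizon is reached)."""
    rng = np.random.default_rng(seed)
    k = len(means)
    pulls, rew, spent_per_arm = np.zeros(k), np.zeros(k), np.zeros(k)
    total_reward, spent = 0.0, 0.0
    for t in range(horizon):
        if spent >= budget:
            break  # knapsack constraint: stop when the resource runs out
        if t < k:
            arm = t  # pull every arm once to initialise estimates
        else:
            bonus = np.sqrt(2.0 * np.log(t + 1) / pulls)
            ucb_reward = rew / pulls + bonus
            lcb_cost = np.maximum(spent_per_arm / pulls - bonus, 1e-6)
            arm = int(np.argmax(ucb_reward / lcb_cost))  # optimistic ratio
        r = float(rng.random() < means[arm])
        c = float(rng.random() < costs[arm])
        pulls[arm] += 1.0
        rew[arm] += r
        spent_per_arm[arm] += c
        total_reward += r
        spent += c
    return total_reward, pulls
```

Running it with one cheap high-reward arm and one expensive low-reward arm shows the budget-aware index concentrating pulls on the cheap arm.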

Find the research paper here.

3. Volume Rendering of Neural Implicit Surfaces

Authors: Lior Yariv, Jiatao Gu, Yoni Kasten, Yaron Lipman

This research paper on computer vision and core machine learning improves geometrical representation and reconstruction in neural volume rendering. The authors take the approach of modelling the volume density as a function of the geometry to produce high-quality geometry reconstructions, outperforming relevant baselines. 
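The core modelling idea, density as a simple transform of a signed distance function, fits in a few lines. The sphere SDF and the constants `alpha` and `beta` below are illustrative assumptions, not the paper's learned quantities:

```python
import numpy as np

def sdf_sphere(x, radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(x, axis=-1) - radius

def laplace_cdf(s, beta):
    """CDF of a zero-mean Laplace distribution with scale beta."""
    return np.where(s <= 0, 0.5 * np.exp(s / beta), 1.0 - 0.5 * np.exp(-s / beta))

def density(x, alpha=10.0, beta=0.1):
    """Volume density as a function of geometry: high and nearly constant
    inside the surface, decaying smoothly to zero outside it."""
    return alpha * laplace_cdf(-sdf_sphere(x), beta)
```

Because the density is tied to a well-defined surface (the SDF zero level set), the rendered geometry stays clean rather than diffusing into free space.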

Find the research paper here.

4. Parameter Prediction for Unseen Deep Architectures

Authors: Boris Knyazev, Michal Drozdzal, Graham Taylor, Adriana Romero Soriano

The authors believe that deep learning has successfully automated the design of features in machine learning pipelines. However, the algorithms optimising neural network parameters remain largely hand-designed and computationally inefficient. Therefore, the authors study whether they can use deep learning to predict these parameters directly by exploiting the past knowledge of training other networks.

Find the research paper here.

5. Learning Search Space Partition for Path Planning

Authors: Kevin Yang, Tianjun Zhang, Chris Cummins, Brandon Cui, Benoit Steiner, Linnan Wang, Joseph E. Gonzalez, Dan Klein, Yuandong Tian

The research paper develops a novel formal regret analysis for when and why an adaptive region partitioning scheme works. Furthermore, the authors propose a new path planning method, LaP3, which improves the function value estimation within each sub-region and uses a latent representation of the search space. 
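The flavour of adaptive region partitioning can be shown with a deliberately simple 1-D sketch (not the paper's LaP3 algorithm, which learns a latent partition of the search space): sample both halves of the current region and recurse into the one with the better average value:

```python
import numpy as np

def partition_minimize(f, low, high, depth=8, samples=16, seed=0):
    """Toy adaptive region partitioning for black-box minimisation:
    at each level, estimate each half of the interval from random samples
    and keep the half whose sampled mean is better."""
    rng = np.random.default_rng(seed)
    for _ in range(depth):
        mid = (low + high) / 2.0
        left = np.array([f(x) for x in rng.uniform(low, mid, samples)])
        right = np.array([f(x) for x in rng.uniform(mid, high, samples)])
        if left.mean() <= right.mean():
            high = mid  # recurse into the left half
        else:
            low = mid   # recurse into the right half
    return (low + high) / 2.0
```

Each split concentrates the sampling budget on the more promising sub-region, which is the behaviour the paper's regret analysis formalises.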

Find the research paper here.

6. Antipodes of Label Differential Privacy: PATE and ALIBI

Authors: Mani Malek, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramer

The authors propose two novel approaches to label differentially private ML, in which the trained model satisfies differential privacy with respect to the labels of the training examples. The two approaches, one based on the PATE framework and one on the Laplace mechanism, demonstrate their effectiveness on standard benchmarks.
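The Laplace-mechanism side can be sketched directly. This is a minimal illustration, not the paper's full ALIBI pipeline; the sensitivity constant reflects that changing one label moves a one-hot vector by 2 in L1 norm:

```python
import numpy as np

def privatize_label(label, num_classes, epsilon, rng):
    """Release a label with epsilon-label-DP via the Laplace mechanism:
    encode it one-hot, then add Laplace noise scaled to sensitivity/epsilon."""
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    sensitivity = 2.0  # swapping the label changes the one-hot vector by 2 in L1
    return one_hot + rng.laplace(scale=sensitivity / epsilon, size=num_classes)
```

A learner would then train on the noisy label vectors (ALIBI additionally applies Bayesian post-processing); with a large privacy budget the argmax usually recovers the true label, while a small epsilon drowns it in noise.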

Find the research paper here.

7. NovelD: A Simple yet Effective Exploration Criterion

Authors: Tianjun Zhang, Huazhe Xu, Xiaolong Wang, Yi Wu, Kurt Keutzer, Joseph E. Gonzalez, Yuandong Tian

The research paper presents NovelD as an alternative exploration method to Random Network Distillation (RND). The criterion, called NovelD, weighs every novel area approximately equally. The authors believe that the algorithm is very simple, yet it matches or even outperforms multiple SOTA exploration methods on many hard exploration tasks. The researchers also found that NovelD outperforms RND in many Atari games.
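The criterion itself is a one-liner. The sketch below swaps the paper's RND novelty estimate for a simple visit-count proxy (an assumption for illustration) and includes NovelD's per-episode first-visit gating:

```python
import numpy as np

class NovelD:
    """Intrinsic reward = max(novelty(s') - alpha * novelty(s), 0),
    granted only on the first visit to s' within an episode."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.counts = {}           # lifelong visitation counts
        self.episode_seen = set()  # per-episode first-visit gate

    def novelty(self, s):
        # count-based stand-in for RND novelty: rarely seen states score high
        return 1.0 / np.sqrt(self.counts.get(s, 0) + 1)

    def reset_episode(self):
        self.episode_seen.clear()

    def intrinsic_reward(self, s, s_next):
        r = max(self.novelty(s_next) - self.alpha * self.novelty(s), 0.0)
        first_visit = s_next not in self.episode_seen
        self.episode_seen.add(s_next)
        self.counts[s_next] = self.counts.get(s_next, 0) + 1
        return r if first_visit else 0.0
```

The clipped difference rewards crossing the frontier from familiar states into novel ones, instead of rewarding raw novelty as RND does.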

Find the research paper here.

8. Luna: Linear Unified Nested Attention

Authors: Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer

The research paper proposes Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. As an alternative to traditional attention mechanisms, Luna introduces an additional sequence of fixed length as input and an additional corresponding output, allowing it to perform the attention operation in linear time while storing adequate contextual information.
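A single-head, unbatched sketch of the two nested attention calls; the shapes and names here are illustrative assumptions, and the real model adds projections, multiple heads, and normalisation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Plain scaled dot-product attention."""
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def luna_attention(x, p):
    """x: (n, d) input sequence; p: (l, d) extra sequence of fixed length l.
    Each nested call costs O(n * l), so the whole thing is linear in n."""
    packed = attention(p, x, x)              # pack: l queries read the n inputs
    unpacked = attention(x, packed, packed)  # unpack: n inputs read the l summaries
    return unpacked, packed
```

The `packed` output plays the role of Luna's additional output sequence; in the paper it is carried forward as the extra input to the next layer.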

Find the research paper here.

9. Interesting Object, Curious Agent: Learning Task-Agnostic Exploration

Author: Simone Parisi, Victoria Dean, Deepak Pathak, Abhinav Gupta

In this research paper on reinforcement learning, the authors propose a paradigm change in the formulation and evaluation of task-agnostic exploration. The authors suggest that the agent first learn to explore many environments without any extrinsic goal in a task-agnostic manner. Later on, the agent effectively transfers the learned exploration policy to explore new environments better when solving tasks.

Find the research paper here.

10. DOBF: A Deobfuscation Pre-Training Objective for Programming Languages

Authors: Baptiste Rozière, Marie-Anne Lachaux, Marc Szafraniec, Guillaume Lample

In this research paper, the authors introduce a new pre-training objective, DOBF, that leverages the structural aspect of programming languages and pre-trains a model to recover the original version of obfuscated source code. The authors demonstrate that models pre-trained with DOBF significantly outperform existing approaches on multiple downstream tasks. During their research, the authors also discovered that their pre-trained model could deobfuscate fully obfuscated source files and suggest descriptive variable names.
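The obfuscation side of the objective is easy to sketch. This regex-based toy renames every identifier to `VAR_i` and records the recovery mapping the model must learn; it glosses over DOBF's separate function/class name classes and its real tokenizer:

```python
import keyword
import re

def obfuscate(code):
    """Replace each distinct identifier with VAR_i and return the mapping
    back to the original names, which is the deobfuscation target."""
    names = []

    def repl(match):
        name = match.group(0)
        if keyword.iskeyword(name):
            return name  # keep Python keywords intact
        if name not in names:
            names.append(name)
        return f"VAR_{names.index(name)}"

    obfuscated = re.sub(r"[A-Za-z_]\w*", repl, code)
    return obfuscated, {f"VAR_{i}": n for i, n in enumerate(names)}
```

Training the model to invert this mapping forces it to infer what a function does from structure alone, which is why the pre-trained model can also suggest descriptive variable names.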

Find the research paper here.

Abhishree Choudhary