Top 10 Machine Learning Papers of 2022

The best of everything and anything released in ML!

The relevance of any field depends on the ongoing research and studies around it, and this holds especially true for a fast-advancing field like machine learning.

To bring you up to speed on the critical ideas driving machine learning in 2022, we handpicked the top 10 research papers for all AI/ML enthusiasts out there!

Let’s dive in!

  1. Artificial Replay: A Meta-Algorithm for Harnessing Historical Data in Bandits

Author(s) Sean R. Sinclair et al.

How best to incorporate historical data into bandit algorithms is still unclear: naively initialising reward estimates with historical samples can suffer from spurious and imbalanced data coverage, leading to computational and storage issues, particularly in continuous action spaces. The paper addresses these obstacles by proposing ‘Artificial Replay’, a meta-algorithm that incorporates historical data into any base bandit algorithm.
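The core trick can be sketched in a few lines. This is our own toy illustration of the idea (the function names and the epsilon-greedy base are ours, not the paper's API): whenever the base bandit chooses an arm, an unused historical sample for that arm is consumed first, and the environment is only queried online when no historical data remains.

```python
import random

def artificial_replay(base_select, base_update, historical, horizon, pull):
    # Toy sketch of the Artificial Replay idea: replay unused historical
    # samples for the chosen arm before spending any online interactions.
    history = {arm: list(r) for arm, r in historical.items()}
    online_pulls = 0
    while online_pulls < horizon:
        arm = base_select()
        if history.get(arm):                 # unused historical data for this arm
            reward = history[arm].pop()      # replay it "for free"
        else:
            reward = pull(arm)               # real online interaction
            online_pulls += 1
        base_update(arm, reward)
    return online_pulls

# Minimal epsilon-greedy base algorithm for the demo.
random.seed(0)
n_arms, counts, values = 2, [0, 0], [0.0, 0.0]

def select():
    if random.random() < 0.1:
        return random.randrange(n_arms)
    return max(range(n_arms), key=lambda a: values[a])

def update(arm, reward):
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

true_means = [0.3, 0.7]
pull = lambda arm: 1.0 if random.random() < true_means[arm] else 0.0
historical = {0: [0.0, 1.0, 0.0], 1: [1.0, 1.0]}  # made-up offline logs
n_online = artificial_replay(select, update, historical, horizon=100, pull=pull)
```

Note that the base algorithm never knows whether a reward came from the logs or from a live pull, which is what lets the wrapper work with any base bandit.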


Read the full paper here.

  2. Bootstrapped Meta-Learning

Author(s) Sebastian Flennerhag et al.

The paper proposes an algorithm in which the meta-learner teaches itself to overcome the meta-optimisation challenge. The algorithm focuses on meta-learning with gradients, which guarantees performance improvements. Furthermore, the paper also looks at how bootstrapping opens up possibilities. 
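To give a feel for bootstrapping, here is a deliberately tiny illustration of the flavour, not the paper's algorithm (which meta-learns through the update rule itself): a learning rate is meta-learned by pushing the learner after L steps towards a "bootstrapped" target obtained by simply letting it run K extra steps. All constants and the 1-D quadratic task are our own choices.

```python
L_STEPS, K_STEPS = 3, 5

def inner_steps(w, lr, n):
    for _ in range(n):
        w = w - lr * 2 * w              # gradient descent on loss(w) = w**2
    return w

lr, w0, meta_lr, eps = 0.05, 1.0, 0.5, 1e-4
for _ in range(50):
    w_L = inner_steps(w0, lr, L_STEPS)
    target = inner_steps(w_L, lr, K_STEPS)   # bootstrapped target (held fixed)
    def match(l):
        # Distance between the L-step learner and the fixed target.
        return (inner_steps(w0, l, L_STEPS) - target) ** 2
    # Finite-difference meta-gradient w.r.t. the learning rate.
    g = (match(lr + eps) - match(lr - eps)) / (2 * eps)
    lr = min(max(lr - meta_lr * g, 1e-3), 0.49)  # keep the inner update stable
```

Holding the target fixed while differentiating only the L-step learner mirrors the stop-gradient on the bootstrapped target; the meta-learner ends up with a learning rate that drives the quadratic loss down quickly.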

Read the full paper here.

  3. LaMDA: Language Models for Dialog Applications

Author(s) Romal Thoppilan et al.

The research describes LaMDA, the system that stirred up the AI world this summer when a former Google engineer claimed it had shown signs of sentience. LaMDA is a family of Transformer-based large language models specialised for dialogue. The model's notable features are its fine-tuning with human-annotated data and its ability to consult external sources. It is a model family we may well encounter in many applications we use daily.

Read the full paper here.

  4. Competition-Level Code Generation with AlphaCode

Author(s) Yujia Li et al.

Code-generation systems can help programmers become more productive, but incorporating recent AI innovations into them has proved difficult. The research addresses this with AlphaCode, a system that generates novel solutions to competitive-programming problems that require deeper reasoning.
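A key ingredient of AlphaCode is generating a very large number of candidate programs and filtering them by executing the problem's example tests. The loop below is a minimal sketch of that sample-then-filter step, with hand-written strings standing in for model samples (the helper name and candidates are ours):

```python
def filter_candidates(candidates, examples):
    # Keep only candidate programs that define solve() and pass every example.
    survivors = []
    for src in candidates:
        namespace = {}
        try:
            exec(src, namespace)                       # "compile" the sample
            solve = namespace["solve"]
            if all(solve(x) == y for x, y in examples):
                survivors.append(src)
        except Exception:
            pass                                       # broken samples are discarded
    return survivors

candidates = [
    "def solve(n): return n + 1",   # wrong answer
    "def solve(n): return n * n",   # passes the examples below
    "def solve(n): return n /",     # does not even parse
]
examples = [(2, 4), (3, 9)]
good = filter_candidates(candidates, examples)
```

Because most samples fail to compile or fail the examples, this filtering step throws away the vast majority of candidates before any further selection is needed.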

Read the full paper here.

  5. Privacy for Free: How does Dataset Condensation Help Privacy?

Author(s) Tian Dong et al.

The paper focuses on privacy-preserving machine learning, specifically reducing the leakage of sensitive data during model training. It puts forth one of the first propositions to use dataset condensation techniques to preserve data efficiency during model training while furnishing membership privacy.

Read the full paper here.

  6. Why do tree-based models still outperform deep learning on tabular data?

Author(s) Léo Grinsztajn, Edouard Oyallon and Gaël Varoquaux

The research examines why deep learning models still find it hard to compete with tree-based models on tabular data. It shows that MLP-like architectures are more sensitive to uninformative features in the data than their tree-based counterparts.
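That sensitivity is easy to probe at home. The experiment below is our own miniature version of the setup, not the authors': a simple task whose signal lives in two features is padded with 48 uninformative noise features, and a random forest is compared against an MLP.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
informative = rng.normal(size=(n, 2))
y = (informative[:, 0] + informative[:, 1] > 0).astype(int)  # signal in 2 features
noise = rng.normal(size=(n, 48))                             # 48 uninformative features
X = np.hstack([informative, noise])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                    random_state=0).fit(Xtr, ytr)
rf_acc, mlp_acc = rf.score(Xte, yte), mlp.score(Xte, yte)
```

In runs like this the forest's accuracy typically degrades far less than the MLP's as noise features are added, which is the qualitative effect the paper measures at scale.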

Read the full paper here.

  7. Multi-Objective Bayesian Optimisation over High-Dimensional Search Spaces

Author(s) Samuel Daulton et al.

The paper proposes ‘MORBO’, a scalable method for multi-objective Bayesian optimisation (BO) over high-dimensional search spaces. By performing BO in multiple local regions of the design space in parallel, MORBO significantly improves sample efficiency on problems where existing BO algorithms fail to scale.
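In multi-objective optimisation the goal is not a single best point but the Pareto front: points no other point beats on every objective at once. A minimal Pareto filter for minimisation (purely illustrative, not MORBO itself) looks like this:

```python
def pareto_front(points):
    # p dominates q if p is at least as good everywhere and strictly better somewhere.
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two objectives to minimise; (4, 4) and (2, 6) are dominated.
points = [(1, 5), (2, 4), (3, 3), (4, 4), (2, 6), (5, 1)]
front = pareto_front(points)
```

Methods like MORBO spend their sample budget trying to cover this front as well as possible rather than converging to one optimum.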

Read the full paper here.

  8. A Path Towards Autonomous Machine Intelligence Version 0.9.2

Author(s) Yann LeCun

The research offers a vision of how to progress towards general AI. The study combines several concepts: a configurable predictive world model, behaviour driven by intrinsic motivation, and hierarchical joint embedding architectures trained with self-supervised learning.

Read the full paper here.

  9. TranAD: Deep Transformer Networks for Anomaly Detection in Multivariate Time Series Data

Author(s) Shreshth Tuli, Giuliano Casale and Nicholas R. Jennings

This is a specialised paper applying the transformer architecture to unsupervised anomaly detection in multivariate time series. Architectures that succeed in other fields are, at some point, usually applied to time series as well, and the research shows improved performance on several well-known datasets.
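The common principle behind models in this family is scoring each time step by how badly a model of normal behaviour reconstructs or predicts it. The sketch below shows that principle in miniature with a naive one-step predictor standing in for the transformer (our simplification, not TranAD itself):

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.1, size=(200, 3))  # 3-variate "normal" behaviour
series[150] += 5.0                            # inject one anomalous time step

# Residual of a naive one-step "reconstruction": previous value predicts next.
resid = np.linalg.norm(np.diff(series, axis=0), axis=1)

# Threshold calibrated on an all-normal prefix of the series.
threshold = resid[:100].mean() + 4.0 * resid[:100].std()
anomalies = np.where(resid > threshold)[0] + 1  # +1: diff shifts indices by one
```

TranAD replaces the naive predictor with adversarially trained transformer encoders, but the detection step, thresholding a reconstruction error, is the same shape.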

Read the full paper here.

  10. Differentially Private Bias-Term only Fine-tuning of Foundation Models

Author(s) Zhiqi Bu et al. 

In the paper, researchers study the problem of differentially private (DP) fine-tuning of large pre-trained models—a recent privacy-preserving approach suitable for solving downstream tasks with sensitive data. Existing work has demonstrated that high accuracy is possible under strong privacy constraints, but requires significant computational overhead or modifications to the network architecture. The paper instead fine-tunes only the bias terms of the model under DP, aiming to keep accuracy while sharply reducing that cost.
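A minimal numpy sketch of the setting (our own toy, not the authors' code): the "pretrained" weights are frozen and DP-SGD, per-example gradient clipping plus Gaussian noise, is run on the bias term alone. Because the bias gradient of a logistic loss is a scalar per example, the private update is cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 8
X = rng.normal(size=(n, d))
W = rng.normal(size=d)                       # "pretrained" weights, frozen
y = (X @ W + 1.0 > 0).astype(float)          # true bias is 1.0

b, lr, clip, sigma = 0.0, 0.5, 1.0, 0.5
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ W + b)))       # sigmoid predictions
    g = p - y                                # per-example grad of BCE w.r.t. bias
    g = np.clip(g, -clip, clip)              # clip each example's contribution
    noisy = g.sum() + sigma * clip * rng.normal()  # Gaussian-mechanism noise
    b -= lr * noisy / n                      # noisy average gradient step
```

With only one trainable scalar, the per-example clipping that dominates DP training cost elsewhere becomes trivial, which is the efficiency argument for bias-only fine-tuning.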

Read the full paper here.

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
