
8 Hottest Machine Learning Papers From The 36th Edition Of ICML 


With more than 14,000 papers submitted every year, the field of AI attracts one of the most productive research communities out there. These papers range from groundbreaking research that can be applied to auxiliary machine learning tasks to techniques for tweaking existing algorithms.

To acknowledge such noteworthy research, ICML (the International Conference on Machine Learning) presents and publishes cutting-edge research on all aspects of machine learning.

Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

Here are a few exciting works (in no particular order) accepted into the prestigious machine learning conference ICML, which concluded last month in Long Beach, California:

1. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations From Google

Disentanglement in this context means breaking each feature down into variables, similar to how humans reason about independent factors. The commonly held notion in the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

In this paper, the authors challenge this notion by theoretically showing that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data.

The results of this study indicate that though different methods successfully enforce properties “encouraged” by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
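To make the “encouraged” properties concrete, here is a minimal sketch (in PyTorch, with assumed tensor shapes) of the β-VAE objective, one of the method families the study benchmarks: a reconstruction term plus a β-weighted KL penalty that pushes the latent posterior towards a factorised prior.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction plus a beta-weighted KL term.
    mu, log_var parameterise the diagonal Gaussian posterior q(z|x)."""
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latents
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    # beta > 1 pressures the posterior towards the factorised prior;
    # per the paper, this alone does not guarantee disentanglement.
    return recon + beta * kl
```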

2. Similarity of Neural Network Representations Revisited By Geoff Hinton et al.

The idea behind this work is to provide a platform to understand how machine learning algorithms interact with data and what insights can be drawn from learning neural network representations.

In this paper, the authors prove that neither canonical correlation analysis (CCA) nor any other statistic that is invariant to invertible linear transformation can measure meaningful similarities between representations of higher dimension than the number of data points.

And, as a solution, they introduce a similarity index, centered kernel alignment (CKA), that measures the relationship between representational similarity matrices.
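The linear form of CKA is particularly simple. Below is a minimal NumPy sketch (the function name and shapes are ours, for illustration):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two layers' activations.
    X: (examples, features_1), Y: (examples, features_2)."""
    X = X - X.mean(axis=0)   # center each feature
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, "fro") *
                   np.linalg.norm(Y.T @ Y, "fro"))
    return numerator / denominator
```

By construction, this index is invariant to rotations and isotropic scaling of either representation, but not to arbitrary invertible linear maps, which is what lets it remain meaningful when layers are wider than the dataset is large.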

3. Collaborative Evolutionary Reinforcement Learning From Intel

This paper introduces Collaborative Evolutionary Reinforcement Learning (CERL), a framework that combines gradient-based and gradient-free learning. The explosion of reinforcement learning and its applications has also drawn researchers’ attention to challenges like hyperparameter tuning and exploration of the solution space in high-dimensional problems. To address these, CERL aims to exploit diverse regions of the solution space.
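The skeleton below sketches that combination (function and parameter names are illustrative, not Intel’s reference code): a population of policies is evolved with gradient-free selection and mutation, while a gradient-based learner trains off the shared replay buffer and periodically injects its weights back into the population.

```python
import random
import numpy as np

def cerl_loop(population, learner, rollout, mutate, buffer, generations=100):
    """Illustrative CERL-style loop. `rollout(policy)` returns
    (episode_return, transitions); `mutate(policy)` returns a perturbed
    copy; `learner` is any off-policy gradient learner sharing `buffer`."""
    for _ in range(generations):
        # 1. Evaluate the whole population; all experience is shared.
        fitness = []
        for policy in population:
            ep_return, transitions = rollout(policy)
            buffer.extend(transitions)
            fitness.append(ep_return)
        # 2. The gradient-based learner exploits the shared buffer.
        learner.update(buffer)
        # 3. Gradient-free step: keep the elites, mutate to refill.
        order = np.argsort(fitness)[::-1]
        elites = [population[i] for i in order[:max(1, len(population) // 4)]]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(len(population) - len(elites))]
        # 4. Inject the learner's policy so gradient information
        #    can propagate through the evolutionary population.
        population[-1] = learner.snapshot_policy()
    return elites[0]  # best policy from the final evaluation
```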

4. Generative Adversarial User Model for Reinforcement Learning Based Recommendation System From Georgia Institute of Technology

The researchers propose a novel model-based reinforcement learning framework for recommendation systems, in which they develop a generative adversarial network (GAN) to imitate user behavior dynamics and learn the user’s reward function.

The authors claim that this model, which can better explain user behavior than alternatives, can lead to a better long-term reward for the user and a higher click rate for the system.
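A rough sketch of the adversarial setup (PyTorch, with made-up network sizes; this shows the general idea, not the paper’s exact architecture): a reward network plays discriminator over (state, clicked-item) pairs, while the user model picks among the displayed items to maximise that learned reward.

```python
import torch
import torch.nn as nn

STATE_DIM, ITEM_DIM = 32, 16  # assumed sizes, for illustration only

reward_net = nn.Sequential(nn.Linear(STATE_DIM + ITEM_DIM, 64),
                           nn.ReLU(), nn.Linear(64, 1))
user_model = nn.Sequential(nn.Linear(STATE_DIM + ITEM_DIM, 64),
                           nn.ReLU(), nn.Linear(64, 1))

def click_distribution(state, items):
    """Generative user model: softmax over the k displayed items."""
    pairs = torch.cat([state.repeat(len(items), 1), items], dim=-1)
    return torch.softmax(user_model(pairs).squeeze(-1), dim=0)

def adversarial_step(state, items, real_click, opt_r, opt_u):
    pairs = torch.cat([state.repeat(len(items), 1), items], dim=-1)
    # Reward (discriminator): real clicks should out-score model clicks.
    scores = reward_net(pairs).squeeze(-1)
    probs = click_distribution(state, items).detach()
    r_loss = (probs * scores).sum() - scores[real_click]
    opt_r.zero_grad(); r_loss.backward(); opt_r.step()
    # User model (generator): maximise expected learned reward + entropy.
    scores = reward_net(pairs).squeeze(-1).detach()
    probs = click_distribution(state, items)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
    u_loss = -((probs * scores).sum() + 0.1 * entropy)
    opt_u.zero_grad(); u_loss.backward(); opt_u.step()
```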

5. Rates of Convergence for Sparse Variational Gaussian Process Regression From University of Cambridge

This work focuses on the computational complexity of sparse variational approximations to Gaussian process regression, and on how closely these cheap approximations match the exact posterior.

The authors show that, with high probability, the KL divergence between the approximate and exact posteriors can be made arbitrarily small by growing M more slowly than N (the number of training examples).

The results show that as datasets grow, Gaussian process posteriors can truly be approximated cheaply, and they provide a concrete rule for how to increase M (the number of inducing variables) in continual learning scenarios.
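For context, the quantity being analysed is the collapsed variational bound of Titsias (2009), which costs O(NM²) per evaluation instead of the exact GP’s O(N³). A self-contained NumPy sketch, with the kernel and hyperparameters assumed for illustration:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sparse_gp_elbo(X, y, Z, noise=0.1):
    """Collapsed variational lower bound (Titsias, 2009) for GP
    regression with N training points X and M inducing inputs Z."""
    N, M = len(X), len(Z)
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(M)     # jitter for stability
    Kmn = rbf(Z, X)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)            # so that Qnn = A.T @ A
    B = np.eye(M) + (A @ A.T) / noise ** 2
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A @ y) / noise ** 2
    # log N(y; 0, Qnn + noise^2 I) via the matrix inversion lemma
    log_det = 2 * np.log(np.diag(LB)).sum() + N * np.log(noise ** 2)
    quad = y @ y / noise ** 2 - c @ c
    log_marg = -0.5 * (N * np.log(2 * np.pi) + log_det + quad)
    # trace term: penalty when the inducing points summarise X poorly
    # (the SE kernel has unit diagonal, so tr(Knn) = N here)
    trace = (N - (A * A).sum()) / (2 * noise ** 2)
    return log_marg - trace

# toy check: the bound tightens as M grows, far more slowly than N
X = np.random.rand(500, 1)
y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(500)
for M in (5, 10, 20):
    Z = X[np.random.choice(len(X), M, replace=False)]
    print(M, sparse_gp_elbo(X, y, Z))
```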

6. HOList: An Environment for Machine Learning of Higher-Order Theorem Proving From Google

In this paper, the researchers at Google present an environment, benchmark, and deep-learning-driven automated theorem prover for higher-order logic.

HOL Light comes with broad coverage of basic mathematical theorems on calculus, as well as the formal proof of the Kepler conjecture, from which the authors derive a challenging benchmark for automated reasoning.

7. High-Fidelity Image Generation With Fewer Labels From Google

In this work, the authors demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on unsupervised ImageNet synthesis, as well as in the conditional setting. In particular, the proposed approach matches the sample quality (as measured by FID) of the current state-of-the-art conditional model, BigGAN, on ImageNet using only 10% of the labels, and outperforms it using 20% of the labels.

8. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks From Google

Convolutional Neural Networks (CNNs) are at the heart of many machine vision applications. From tagging photos online to self-driving cars, CNNs have proven to be of great help.

As the number of applications involving CNNs increases, so does the need to improve them.

So, to balance the trade-off between accuracy and efficiency, the researchers at Google introduce a more principled approach to scaling CNNs.

In the paper titled “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”, the authors propose a family of models, called EfficientNets, which they believe surpass state-of-the-art accuracy with up to 10x better efficiency (smaller and faster).

In this work, the authors propose a compound scaling method which offers more control over model performance by placing constraints on the scaling coefficients. In other words, it prescribes when and by how much to increase or decrease the depth, width, and resolution of a given network.
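Concretely, the compound rule scales all three dimensions with a single coefficient φ, using base multipliers found by a small grid search. The α, β, γ values below are the ones reported in the paper, chosen under the constraint α·β²·γ² ≈ 2 so that FLOPS roughly double per unit of φ; the function itself is a minimal sketch:

```python
import math

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15   # depth, width, resolution multipliers

def compound_scale(phi, base_depth, base_width, base_resolution):
    """Scale a baseline network's layers, channels and input size
    jointly by the compound coefficient phi (EfficientNet-style)."""
    depth = math.ceil(base_depth * ALPHA ** phi)             # layers
    width = int(round(base_width * BETA ** phi))             # channels
    resolution = int(round(base_resolution * GAMMA ** phi))  # input px
    return depth, width, resolution

# e.g. growing a stage of 3 layers, 64 channels at 224x224 input
print(compound_scale(phi=3, base_depth=3, base_width=64, base_resolution=224))
# -> (6, 85, 341)
```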

This year’s ICML also drew attention for the number of papers accepted with code. Papers with code ensure authenticity and enable reproducibility, which is, or should be, the underlying objective of any research. With such high numbers, the 36th edition of this machine learning conference proved once again why it is one of the most famous events of the year.

Check out the other accepted papers here.
