ML Research Papers From India Accepted At ICML 2021

For this year’s conference, held online from July 18-24, the number of accepted papers stood at 1,184.

ICML is a renowned platform for presenting and publishing cutting-edge research on all aspects of machine learning, statistics and data science. 

Last year, the ICML conference attracted 4,990 submissions, with 1,088 accepted papers at a 21.8 percent acceptance rate. For this year’s conference, held online from July 18-24, the number of accepted papers stood at 1,184.

Below, we have listed research papers from India accepted at ICML 2021. 


RRL: Resnet as representation for Reinforcement Learning 

Rutav Shah of the Indian Institute of Technology (IIT), Kharagpur, in collaboration with Vikash Kumar of the University of Washington and Facebook AI, proposed a straightforward yet effective approach that can learn complex behaviours directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. 

On a simulated dexterous manipulation benchmark, where SOTA methods fail to make significant progress, the researchers noted that RRL delivers contact-rich behaviours. 
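The core recipe (a frozen pre-trained encoder feeding a small trainable policy head) can be sketched in a few lines. Everything below is a stand-in: the random projection plays the role of the pre-trained Resnet, and the input size and action count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for RRL's frozen feature extractor. In the paper the
# features come from a pre-trained Resnet; a fixed random projection plays
# that role here so the sketch stays self-contained.
W_frozen = rng.normal(size=(512, 32 * 32))    # 512-dim features, 32x32 input

def extract_features(image):
    """Map a raw observation to a fixed feature vector; never trained."""
    return np.tanh(W_frozen @ image.ravel())

# Only this small policy head is updated by the RL algorithm.
W_policy = 0.01 * rng.normal(size=(4, 512))   # 4 discrete actions, invented

def act(image):
    return int(np.argmax(W_policy @ extract_features(image)))

action = act(rng.normal(size=(32, 32)))
print(action)
```

The point of the split is that the expensive representation is reused as-is, so the RL algorithm only ever optimises the small head.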


Check out the full research paper here

On Characterizing GAN Convergence Through Proximal Duality Gap

In this work, IIT Ropar researchers Sahil Sidheekh, Aroof Aimen, and Narayanan C Krishnan extended the notion of the duality gap to a proximal duality gap that applies to the general context of training Generative Adversarial Networks (GANs), where Nash equilibria may not exist. 

The researchers theoretically showed the proximal duality gap could monitor the convergence of GANs to a wider spectrum of equilibria that subsumes Nash equilibria. They also established the relationship between the proximal duality gap and the divergence between the real and generated data distributions for different GAN formulations. The result provided new insights into the nature of GAN convergence and validated the usefulness of the proximal duality gap for monitoring and influencing GAN training. 
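For context, the (non-proximal) duality gap used in earlier work on GAN convergence can be written as follows, with V the usual minimax value function; this notation is standard rather than taken from the paper, which generalises the inner optimisations proximally:

```latex
\mathrm{DG}(G, D) \;=\; \max_{D'} V(G, D') \;-\; \min_{G'} V(G', D) \;\ge\; 0
```

The gap is always non-negative and vanishes exactly at a Nash equilibrium, which is why it fails as a convergence monitor for games where no Nash equilibrium exists, the setting the proximal extension targets.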

Check out the research paper and ICML 2021 presentation slides here

SiameseXML: Siamese Networks meet Extreme Classifiers with 100M Labels

Researchers from IIT Delhi, in partnership with Microsoft Research and IIT Kanpur, developed the SiameseXML framework based on a novel probabilistic model that naturally motivates a modular approach melding Siamese architecture with high-capacity extreme classifiers, and a training pipeline that effortlessly scales to tasks with 100 million labels. The research team included Kunal Dahiya, Ananye Agarwal, Deepak Saini, Gururaj K, Jian Jiao, Amit Singh, Sumeet Agarwal, Purushottam Kar, and Manik Varma.

The proposed technique offers 2 to 13 percent more accurate predictions than leading XML methods on public benchmark datasets, as well as in live A/B tests on the Bing search engine. It also offers significant gains in click-through rates, coverage, revenue and other online metrics over SOTA techniques currently in production. 
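The defining Siamese ingredient, a single shared encoder embedding both documents and labels into one space so that matching pairs score highly, can be illustrated minimally. The dimensions, the random encoder, and the synthetic "label text" below are all invented for the sketch and are not SiameseXML's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shared encoder: ONE weight matrix embeds both documents and
# labels, which is the defining property of a Siamese architecture.
W_shared = rng.normal(size=(16, 300))

def embed(x):
    v = W_shared @ x
    return v / np.linalg.norm(v)          # unit norm, so dot = cosine

doc = rng.normal(size=300)                     # a document's text features
label_pos = doc + 0.1 * rng.normal(size=300)   # label text close to the doc
label_neg = rng.normal(size=300)               # unrelated label text

sim_pos = float(embed(doc) @ embed(label_pos))
sim_neg = float(embed(doc) @ embed(label_neg))
print(sim_pos > sim_neg)   # the matching label should score higher
```

In the full framework, these Siamese embeddings are then refined by per-label extreme classifiers, which is what lets the approach scale to the 100-million-label regime.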

The source code for SiameseXML is available on GitHub. Also, check out the research paper here

Bayesian Structural Adaptation for Continual Learning 

Two recent advances in continual learning with neural networks are (i) variational Bayes-based regularisation, which learns priors from previous tasks, and (ii) learning the structure of deep networks to adapt to new tasks. The two approaches are largely orthogonal. 

Addressing the shortcomings of both approaches, IIT Kanpur researchers Abhishek Kumar, Sunabha Chatterjee and Piyush Rai presented a novel Bayesian approach to continual learning. The proposed model learns the deep structure for each task by inferring which weights to use, and supports inter-task transfer through overlapping sparse subsets of weights learned by different tasks. 

Further, experimental results on supervised and unsupervised benchmarks showed the model performed comparably to, or better than, recent advances in the continual learning setting. 
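The structural idea, per-task sparse subsets of a shared weight pool with overlap enabling transfer, can be illustrated without any of the Bayesian machinery. The masks, weights and linear model below are made up purely for the sketch.

```python
import numpy as np

# Sketch of the structure only (none of the paper's Bayesian inference):
# each task activates a sparse subset of a shared weight pool, and the
# overlap between subsets is what allows transfer across tasks.
weights = np.arange(10, dtype=float)          # shared pool of 10 weights

mask_task1 = np.array([1,1,1,1,0,0,0,0,0,0], dtype=bool)
mask_task2 = np.array([0,0,1,1,1,1,0,0,0,0], dtype=bool)  # overlaps task 1

shared = mask_task1 & mask_task2              # weights reused by both tasks
print(np.flatnonzero(shared))                 # indices active in both masks

def forward(x, mask):
    # Hypothetical linear model using only the task's active weights.
    return float(x[mask] @ weights[mask])

x = np.ones(10)
print(forward(x, mask_task1))                 # 0+1+2+3 = 6.0
```

In the paper the masks themselves are learned (which weights to use per task); here they are fixed by hand just to show the overlap mechanism.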

Check out the research paper here

Training Data Subset Selection for Regression with Controlled Generalization Error 

Researchers from IIT Bombay, in collaboration with UT Dallas, designed an algorithm for selecting a subset of the training data so that the model can be trained quickly, without compromising on accuracy. The paper was co-authored by Durga Sivasubramanian, Rishabh Iyer, Ganesh Ramakrishnan, and Abir De.

The researchers focused on data subset selection for L2-regularised regression problems and provided a novel problem formulation that seeks to minimise the training loss with respect to both the trainable parameters and the subset of training data, subject to error bounds on the validation set. 

Further, the researchers simplified the constraints using the dual of the original training problem and showed the objective of this new representation is a monotone and α-submodular function for a wide variety of modelling choices. This led them to develop SELCON, an algorithm for data subset selection. 
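A toy version of the underlying selection problem conveys the flavour: grow a training subset greedily, at each step adding the point whose inclusion most reduces validation loss of the refit regularised model. This is only a sketch of the problem setup, not SELCON's actual algorithm or its constraint handling; the data and budget are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic L2-regularised regression instance.
X = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=40)
X_val, y_val = X[:10], y[:10]                 # held-out validation split
X_tr,  y_tr  = X[10:], y[10:]                 # pool to select from
lam = 1e-3                                    # L2 regularisation strength

def val_loss(idx):
    """Refit ridge regression on the chosen subset, score on validation."""
    Xs, ys = X_tr[idx], y_tr[idx]
    w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(3), Xs.T @ ys)
    return float(np.mean((X_val @ w - y_val) ** 2))

subset = []
for _ in range(5):                            # budget of 5 points
    best = min((i for i in range(len(X_tr)) if i not in subset),
               key=lambda i: val_loss(subset + [i]))
    subset.append(best)

print(len(subset), val_loss(subset))
```

Naive greedy refitting like this is expensive; the dual reformulation and α-submodularity result are precisely what make a principled, efficient version possible.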

Check out the full research paper here

GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training

Researchers from IIT Bombay and the University of Texas at Dallas, Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Abir De and Rishabh Iyer, proposed a general framework called GRAD-MATCH. The framework finds subsets that closely match the gradient of the training or validation set. 

With extensive experiments on real-world datasets, the researchers showed the effectiveness of their proposed framework. GRAD-MATCH significantly and consistently outperformed several recent data-selection algorithms and achieved the best accuracy-efficiency trade-off.
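The gradient-matching objective itself is easy to state in code: choose a subset whose averaged per-example gradient is close, in L2 norm, to the full-dataset gradient. The sketch below uses random vectors as stand-in gradients and a plain unweighted greedy step; the paper's actual algorithm is more sophisticated (for example, it learns subset weights).

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in per-example gradients; in practice these come from the model.
n, d = 200, 5
G = rng.normal(size=(n, d))
g_full = G.mean(axis=0)              # full-dataset mean gradient

subset, budget = [], 10
for _ in range(budget):
    # Greedy: add the example that most reduces the gradient-matching error.
    def err(i):
        s = subset + [i]
        return float(np.linalg.norm(G[s].mean(axis=0) - g_full))
    best = min((i for i in range(n) if i not in subset), key=err)
    subset.append(best)

err_subset = float(np.linalg.norm(G[subset].mean(axis=0) - g_full))
err_random = float(np.linalg.norm(G[:budget].mean(axis=0) - g_full))
print(err_subset, err_random)
```

Training on such a subset then approximates full-data gradient descent at a fraction of the cost, which is where the accuracy-efficiency trade-off comes from.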

The code is available on GitHub. Check out the research paper here

Adversarial Dueling Bandits 

A group of researchers from the Indian Institute of Science (IISc), Bengaluru, in partnership with Tel Aviv University and Google Tel Aviv, introduced the problem of regret minimisation in adversarial dueling bandits. The team included Aadirupa Saha, Tomer Koren and Yishay Mansour. 
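What makes the dueling-bandit setting distinctive is its feedback model: each round the learner selects a pair of arms and observes only which of the two won the duel, never a numeric reward. A minimal interaction loop looks like the following; it is illustrative only (uniform exploration, invented arm qualities and preference model), not the paper's algorithm or its adversarial setting.

```python
import numpy as np

rng = np.random.default_rng(4)

K, T = 4, 500
quality = np.array([0.1, 0.4, 0.6, 0.9])      # hidden arm strengths (invented)
wins = np.ones((K, K))                        # pairwise win counts (smoothed)
plays = 2 * np.ones((K, K))

for t in range(T):
    i, j = rng.choice(K, size=2, replace=False)   # pick a PAIR of arms
    # Probability that i beats j, via a logistic preference model.
    p_i_beats_j = 1 / (1 + np.exp(-(quality[i] - quality[j]) * 5))
    winner = i if rng.random() < p_i_beats_j else j
    wins[winner, i + j - winner] += 1         # winner vs. the other arm
    plays[i, j] += 1
    plays[j, i] += 1

winrate = wins.sum(axis=1) / plays.sum(axis=1)
best = int(np.argmax(winrate))
print(best)                                   # strongest arm should surface
```

Regret minimisation then asks how quickly a learner can concentrate its duels on the best arm from such purely relative feedback, here against an adversarially chosen preference sequence rather than the fixed one above.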

Check out the full research paper here

Finding k in Latent k-Polytope

Researchers from IISc, Bengaluru, along with Microsoft Research India and IIT Delhi, addressed the challenge of finding k in the Latent k-Polytope (LkP) by introducing the interpolative convex rank (ICR) of a matrix, defined as the minimum number of its columns whose convex hull is within Hausdorff distance ε of the convex hull of all columns. The team comprised Chiranjib Bhattacharyya, Ravi Kannan and Amit Kumar.
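The verbal definition can be written compactly. With A the matrix, A_i its columns, conv(·) the convex hull and d_H the Hausdorff distance (notation assumed here, not copied from the paper):

```latex
\mathrm{ICR}_{\varepsilon}(A) \;=\; \min \bigl\{\, k \;:\; \exists\, i_1, \dots, i_k
\text{ such that } d_H\bigl(\mathrm{conv}(A_{i_1}, \dots, A_{i_k}),\,
\mathrm{conv}(A_1, \dots, A_n)\bigr) \le \varepsilon \,\bigr\}
```

Intuitively, it is the smallest number of actual columns that approximately span the whole column set as a convex body, which is what ties it to recovering k in the latent polytope.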

Check out the full research paper here

Domain Generalization using Causal Matching

Researchers from Microsoft Research, India, alongside Microsoft Research, UK, proposed matching-based algorithms for when base objects are observed (for example, through data augmentation), and an approximation of the objective, MatchDG, for when they are not. The team comprised Divyat Mahajan, Shruti Tople, and Amit Sharma.

The researchers found their simple matching-based algorithms were competitive with prior work on out-of-domain accuracy for rotated MNIST, Fashion-MNIST, PACS, and Chest X-ray datasets. Plus, MatchDG recovered ground-truth object matches: on MNIST and Fashion-MNIST, top-10 matches from MatchDG had over 50% overlap with ground-truth matches. 
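The matching idea can be shown in miniature: representations of the same base object observed in two domains (say, a digit and its augmented copy) should lie closer together than representations of different objects. Everything below (the random encoder, noise levels, dimensions) is invented for illustration and is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical shared encoder mapping inputs to a representation space.
W = rng.normal(size=(8, 32))

def rep(x):
    return W @ x

obj = rng.normal(size=32)                     # base object features
domain_a = obj + 0.05 * rng.normal(size=32)   # the object as seen in domain A
domain_b = obj + 0.05 * rng.normal(size=32)   # the SAME object in domain B
other    = rng.normal(size=32)                # a different object

match_dist = float(np.linalg.norm(rep(domain_a) - rep(domain_b)))
other_dist = float(np.linalg.norm(rep(domain_a) - rep(other)))
print(match_dist < other_dist)   # matched pair lies closer in representation
```

Training pushes the encoder toward exactly this geometry; when base-object pairs are not observed, MatchDG approximates them, which is what the overlap-with-ground-truth result above is measuring.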

Check out the research paper and ICML 2021 presentation slides here


Amit Raja Naik
Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.


