The International Conference on Machine Learning (ICML) is just around the corner; however, it is trending for a different reason: rejection. AI/ML researchers are taking to social media to question the paper selection process.
The 39th edition of ICML is slated to be held in Baltimore, Maryland, from July 17 to 23. Last year, 5,513 papers were submitted, 10 percent more than the year before. Of the 5,513 submissions, 1,184 were accepted, an acceptance rate of 21.5 percent, down from 21.8 percent the previous year.
Yann LeCun, chief AI scientist at Meta, tweeted, “If I go by tweet statistics, ICML has rejected every single paper this year.” Interestingly, he had three of his papers rejected.
ICML is a renowned platform for presenting and publishing cutting-edge research on machine learning, statistics and data science. Let’s understand the submission and selection process first.
Submission and selection process
Researchers submit their papers through Microsoft's Conference Management Toolkit (CMT). Submissions opened on January 12, 2022, and authors were notified on May 14 (Saturday). ICML adheres strictly to its deadlines and, according to the website, does not extend them under any circumstances.
This year, ICML announced three changes to its reviewing, paper formatting, and submission processes. First, reviews will now take place in two phases. Second, authors must prepare and submit their papers as a single file. Lastly, there is no longer a separate deadline for submitting supplementary material.
To be accepted, papers must be based on original research. The results can be theoretical or empirical, but they must present novel findings of significant interest to the ML community.
The reviewing panel judges papers on the degree to which their results have been objectively established. Potential for scientific and technological impact is also taken into account, and, where appropriate, reproducibility of results and easy availability of code factor into the decision.
Why are authors unhappy?
Despite ICML's low acceptance rate being well known, many authors are unhappy with the reviewing process itself.
“Though some of my papers got accepted, my favourite submission was rejected by ICML simply because “the reviews are not very insightful, unfortunately”, quoted from the 1st sentence in meta-review. Why should the authors pay for the low review quality? Ridiculous review system,” said Hongyang Zhang, assistant professor at Waterloo’s Cheriton School of Computer Science.
Yi Ma, professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley, tweeted, “My problem is that a much-better-prepared paper with very positive reviews got rejected and a paper with dubious reviews accepted.”
He said ICML meta reviews are just arbitrary. “In rebuttal, we did exactly what reviewers asked, but AC rejected, saying: ‘it is not certain whether this new experiment performed in a short period of time was done accurately.’”
“If you can reject by simply not trusting rebuttal, why have it?” he asked.
AI researcher and engineer Charles Martin took to LinkedIn to share his disappointment about his paper being rejected. He, too, like the other authors, questioned the review process.
“Got our ICML rejection letters last night for our latest weight watcher papers. The general theme is that the reviewers don’t understand the theory so they won’t accept empirical studies. Does anyone really understand why deep learning works?” he posted.
Leon Palafox, head of AI at Algorithia/Grupo Salinas, gave his two cents on the ICML rejection spree. “With so many great researchers having all of their submissions rejected from ICML, I guess this conference is going to be either a groundbreaking event or just a random sample not really representative of the best papers,” he said.