This year, the thirty-fourth annual Conference on Neural Information Processing Systems, NeurIPS 2020, is being held virtually from 6th to 12th December. Paper acceptances are up 38% on last year: a total of 1,903 papers were accepted, compared to 1,428 in 2019.
The review period began in July, and in August the popular AI conference sent out the paper reviews for this year's event. This has once again placed the machine learning conference amid controversy, as authors have claimed that the reviews are "terrible": unclear, containing incomplete sentences, and so on.
In a tweet, Anima Anandkumar, Director of AI Research at NVIDIA and a professor at the California Institute of Technology, shed light on the paper reviews. Anandkumar said that she has been witnessing terrible paper reviews for the "nth time" and that reviewers deserve to be banned from submitting as authors "if they can't write a review in good faith".
Replying to this tweet, Suraj Krishnamurthy, an alumnus of IISc Bangalore, stated that the reviews of his paper were even worse, with "absolutely no takeaway messages from the reviewers."
Google AI researcher Rohan Anil mentioned in a tweet that the reviews were of poor quality, with incomplete sentences from the reviewers.
Some machine learning researchers suggested solutions to the "terrible" reviews. Christian Szegedy, an AI research scientist, suggested that NeurIPS adopt a different review process. One proposed way to improve the system is to decouple the tasks of judging and analysing the papers.
Szegedy added, “I would suggest a review process in which in the first phase, the (fewer) reviewers’ task is to analyse the paper. The reviewers should have the opportunity to give an enthusiastic recommendation of acceptance or reject if they feel strongly, but are not forced to rate at all.”
Replying to his own tweet, Szegedy further mentioned that in the second phase, the reviewers should stack-rank the papers based on the first round's analysis. This would be useful for borderline papers, for which the reviews are lukewarm.
Meanwhile, Victor Zakhary, a software engineer at Oracle, tweeted that the system could adopt something similar to arXiv, with public reviews in a decentralised manner, or use a blockchain-like solution where paper reviews are public and tied to their writers.
However, some machine learning enthusiasts had a different take on the review system. In a comment, a Reddit user mentioned that the NeurIPS reviews they received were "actually really fair and detailed for the most part." The user also mentioned that the problem is that NeurIPS wants publications to be new and not published elsewhere (except arXiv).
Also, Graham Neubig, an associate professor at CMU, mentioned in a tweet that 70-80% of the reviews at NeurIPS 2020 were reasonable, and that addressing the reviewers' complaints would make the papers better.
This is not the first time that controversy has marred the reputation of the conference.
In 2018, the organisers of the conference, then known as NIPS (Neural Information Processing Systems), changed the event's name to NeurIPS after a controversy over whether "NIPS" was an offensive name. According to reports, researchers signed a letter calling on NIPS to be rebranded following reports of inappropriate behaviour. The letter stated that the "acronym of the conference is prone to unwelcome puns."