Researchers complaining about the paper review process have become a common sight at academic conferences. The perennial questions: How do reviewers judge the novelty of a paper? Why did a similar work get better reviews? Given the low acceptance rates at highly coveted conferences, researchers can be incentivised to pursue unethical practices. The situation got so bad that the Association for Computing Machinery (ACM) had to make a statement.
In a recently published article on the ACM Communications portal, ACM Fellow and renowned machine learning researcher Michael L. Littman expressed his discontent over the alleged large-scale gaming of reviewing systems within the computer science (CS) community. “I want to alert the community to a growing problem that attacks the fundamental assumptions that the review process has depended upon,” wrote Littman.
Within the first two weeks of December last year, arXiv, the popular repository for machine learning research papers, witnessed close to 600 uploads.
- CVPR accepted 1,470 computer vision papers from 6,656 valid submissions.
- ICLR 2020 accepted 687 of 2,594 papers, a 26.5% acceptance rate.
- ICML accepted 1,088 papers from 4,990 submissions.
Last year, Huixiang Chen, a PhD candidate at the University of Florida, took his own life, alleging ‘academic dishonesty’ among his peers and, in particular, his doctoral advisor, who also happened to be an external reviewer at one of the prestigious conferences. The investigation team, headed by ACM-IEEE, concluded there was no evidence of misconduct despite reports that Chen’s computer contained hundreds of papers with full submission details. These papers are supposed to be double-blind; neither the names of authors nor those of reviewers should be known. The episode has since snowballed into revelations of widespread fraudulent activity within the highly revered research community. In his article, Littman explained how the collusion rings operate:
- A group of authors collude, write and submit papers to the conference.
- Colluders share the titles of each other’s papers, violating the tenet of blind reviewing and creating a significant undisclosed conflict of interest.
- Colluders hide these conflicts of interest and bid to review the papers, sometimes from duplicate accounts, so that they are assigned as reviewers.
- Colluders hype these papers with positive reviews.
- Colluders threaten reviewers who don’t collude.
- Some colluding reviewers even temporarily change their names to escape the risk of being associated with ill-reputed papers.
“The quality, and perhaps even more importantly, the overall integrity, of the conference suffers as a result.”
Compared to other domains, the trouble with CS research is the rapid pace at which sub-domains like machine learning move, which makes it difficult to stay in touch with what is genuinely novel. A reviewer working in a niche area might be impressed by something cutting edge that seems hyperbolic to others. PhD and postdoc candidates are obliged to tick all the boxes; getting one’s work accepted at a prestigious conference is all but mandatory for a decent career. Given these pressures, gaming the reviews looks almost inevitable.
One of the investigative officers in Chen’s case anonymously admitted that the SIG community had a collusion problem. The investigators found that a group of programme committee (PC) members and authors had colluded to bid for and push each other’s papers, violating the usual conflict-of-interest rules. On manual analysis, the investigators uncovered many such cases across different conferences, spanning many years. In his article, Littman calls for better investigative tools to safeguard the integrity of conferences. Meanwhile, some ML practitioners suggest moving beyond the binary accept-reject model of open reviews and leveraging machine learning models to make personalised paper recommendations, with options to sort by reviewer score, relevance and other metrics. Some go further, suggesting a social media setup in which papers are upvoted or downvoted anonymously. But there is a problem with this as well. The social-mediafication of academic processes can invite malicious players (think bots) that push trivial research to the top just because a researcher with a higher h-index has upvoted it.
It is clear the review process is broken and needs a fix, and the esteemed research committees might well find one. But amid the noise of citation bubbles, collusion and academic dishonesty, will real research get lost? Academic rigour is essential, but how well does it serve a genuine breakthrough? What are the chances that a radical paper (think: Einstein’s annus mirabilis) is put down by reviewers accustomed to reviewing certain kinds of papers?