
Springer Nature Asked Not To Publish A Deep Learning Paper

“Machine learning does not have a built-in mechanism for investigating or discussing the social and political merits of its outputs.”

2,212 expert researchers and practitioners across a variety of technical, scientific and humanistic fields, including statistics, machine learning and artificial intelligence, law, sociology, history, communication studies and anthropology, have signed a petition urging Springer Nature not to publish a research paper they deem potentially harmful.

The petition made the following demands:

  • The review committee should publicly rescind the offer to publish this specific study, along with an explanation of the criteria used to evaluate it.
  • Springer should issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging its role in incentivising such harmful scholarship in the past.
  • All publishers should refrain from publishing similar studies in the future.

The argument here is that the uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.

About The Paper

A group of Harrisburg University professors and a PhD student developed automated facial recognition software that, they claimed, could predict whether someone is likely to become a criminal. Their paper, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing”, was to be published in the book series “Springer Nature – Research Book Series: Transactions on Computational Science & Computational Intelligence.”

“This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

Prof. Roozbeh Sadeghian, one of the authors

The research was touted as a way to help law enforcement identify the criminality of a person from a facial image and prevent crimes in their jurisdictions. The press release was followed by the petition, and soon after, the publisher confirmed in a tweet that it would not be publishing the paper.

Update: Springer Nature’s Director of Communications, Renate Bayaz, reached out to Analytics India Magazine and provided the following statement:

“We acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication. It was submitted to a forthcoming conference for which Springer will publish the proceedings of in the book series Transactions on Computational Science and Computational Intelligence and went through a thorough peer review process.  The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June. The details of the review process and conclusions drawn remain confidential between the editor, peer reviewers and authors.”

The statement says that the review process and conclusions remain confidential, whereas open peer review is standard practice in the machine learning community. Reviewer feedback not only helps the authors concerned but also offers insights that help other researchers avoid repeating the same mistakes. In the long run, closed-door criticism in a scientific domain will only fuel more scepticism within the community and might even impede the advancement of a field such as AI, which has just started to flourish.

A Minority Report In The Making

This self-critique must be integrated as a core design parameter, not a last-minute patch. 

Bias in algorithmic solutions is nothing new. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), the infamous recidivism algorithm used to predict whether an offender will re-offend, was found to be unfair to Black defendants: they were far more likely than white defendants to be incorrectly flagged as high risk of recidivism, while white defendants were more likely than Black defendants to be incorrectly flagged as low risk.
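To make that kind of disparity concrete, here is a minimal, hypothetical sketch of how group-wise false positive and false negative rates of a binary risk classifier can be compared. The toy data, group labels and column names are assumptions purely for illustration; this is not ProPublica’s actual COMPAS analysis.

```python
import pandas as pd

# Toy records, made up for illustration: each row is a defendant with a group
# label, the model's high-risk flag (1 = flagged high risk), and whether the
# person actually re-offended.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   1,   0,   1,   0,   0,   0],
    "reoffended": [0,   1,   0,   1,   1,   0,   1,   0],
})

for name, sub in df.groupby("group"):
    # False positive rate: share of people who did NOT re-offend
    # but were still flagged as high risk.
    fpr = ((sub.high_risk == 1) & (sub.reoffended == 0)).sum() / (sub.reoffended == 0).sum()
    # False negative rate: share of people who DID re-offend
    # but were flagged as low risk.
    fnr = ((sub.high_risk == 0) & (sub.reoffended == 1)).sum() / (sub.reoffended == 1).sum()
    print(f"group {name}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

A large gap between groups on either rate is the kind of unfairness reported for COMPAS: one group incorrectly flagged as high risk far more often, the other incorrectly flagged as low risk more often.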

Researchers have yet to figure out how to keep biases from creeping into machine learning models. Using machine learning for non-critical applications, such as finding friends in a group photo, is harmless; tasking the same kind of model with reading facial features to assess potential criminality, however, is an ethical disaster waiting to happen.

The intent of the authors might have been benevolent; they wanted to catch wrongdoers. But current law enforcement is far from perfect, and many of its policies persist only for lack of a better alternative. If we throw AI into the mix of these already cluttered institutions and something goes wrong, will a human (researcher or lawmaker) take responsibility, or will they resort to a perpetual blame game under the guise of faulty algorithms?

That said, there is also scepticism within the community about the way the research was taken down. If this whole ordeal sets a new precedent, will more petitions follow? And in such a scenario, will some research share the same fate as the wrongfully convicted?

Ethics in AI is a contentious subject and may remain so. However, for AI to be widely accepted, it will need the consensus of all parties concerned. The decision-makers should include a body of experts who also represent marginalised communities and those who have been on the wrong end of such systems. Whether this will clear the path for AGI is yet to be determined, but even reaching a conscious, collective agreement on a solution, or on the lack of one, would be a good starting point.

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.
