
Has AI Storytelling Become Myopic? Where Does Researchers’ Responsibility Lie

Abhijeet Katte

A researcher recently laid out a controversial proposal: add a step to the peer review process at computer science journals and conferences that examines the societal consequences of the research. In an interview published in Nature, Brent Hecht, an assistant professor at Northwestern University, director of the People, Space, and Algorithms Research Group, and chair of the ACM Future of Computing Academy, said that "peer reviewers must ensure that researchers consider negative societal consequences of their work." He also holds the strong opinion that the review process should require researchers to assess how their technology could be used in the future; if a paper lacks such an analysis, the journal should reject it.

In March 2018, Hecht wrote a proposal titled It's Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process, in which he argued that the research community thinks only of the benefits a paper can bring to society, not of its harms. He said, "Rose-colored glasses are the normal lenses through which we tend to view our work." Hecht goes on to say that media stories cover the good aspects of a technology almost exclusively and rarely bother with other angles. He strongly believes that the negative side effects of current computer science research are "increasingly high-profile, pervasive, damaging" and grossly under-reported.

Intellectual Lapse

The gap between the storytelling around the benefits of evolving computer science and AI research and the storytelling around its unwanted side effects is huge. For a society to focus only on the benefits of a technology and ignore its potential for misuse represents a serious intellectual lapse. Researchers like Hecht feel that even this gap is under-reported, and that it is analogous to a medical community talking only about the benefits of a new drug or treatment while ignoring or under-reporting its harmful side effects, even when they are fatal.

Taking this into account, the ACM (Association for Computing Machinery) has updated the "ACM Code of Ethics and Professional Conduct", originally written in 1992. The code says that researchers should aim to contribute to society and to human well-being, acknowledging that all humans are stakeholders in computing. It specifically directs researchers to avoid harm, which means avoiding "negative consequences, especially when those consequences are significant and unjust."

In an interview with Nature, Hecht said, “The idea is not to try to predict the future, but, on the basis of the literature, to identify the expected side effects or unintended uses of this technology. It doesn’t sound that big, but because peer reviewers are the gatekeepers to all scholarly computer-science research, we’re talking about how gatekeepers open the gate.”

A New Approach To Peer Review

Researchers have started to agree that computer scientists should now aggressively address the downsides of new algorithms and technologies. The idea an ACM committee arrived at after many sittings is to leverage the gatekeeping function of the peer review process: the small change of making researchers consider negative impacts at the point of submission to a journal or conference could have an outsized effect.

The change suggested by the committee broadly states:

Peer reviewers should require that papers and proposals rigorously consider all reasonable broader impacts, both positive and negative. The committee also says that discussing the potential positive and negative impacts of research should become standard practice, and that doing so will go a long way towards building a better society.

The researchers are asking other reviewers to take up the suggestion immediately because the recommendations are easy to adopt: reviewers can apply the recommendation to the next paper that appears on their review stack, citing the post as justification. The announcement made by the committee said, "If widely adopted, we believe that this recommendation can make meaningful progress towards a style of computing innovation that is a more unambiguously positive force in the world."

Yuval Noah Harari, a historian and philosopher, says, "Computer scientists are developing artificial intelligence (AI) algorithms that can learn, analyse massive amounts of data and recognise patterns with superhuman efficiency. These algorithms could eventually push hundreds of millions out of the job market." As a response to the problem, he suggests, "Governments might decide to deliberately slow down the pace of automation, to lessen the resulting shocks and allow time for readjustments."

Many experts also feel that, instead of handing so much power to any single authority, it would be prudent to give our existing gatekeepers, the reviewers at highly placed science and technology journals and conferences, the power to reject research that would be negative for society. More importantly, this approach puts the onus on researchers to properly evaluate the work they are doing, and it directs today's media outlets and AI enthusiasts to showcase the full picture of every technology development.

Copyright Analytics India Magazine Pvt Ltd
