Forecasts of machine learning's societal impact have oscillated between a benevolent future of possibilities and an exaggerated dystopia. The ethics of AI usage, the rationale behind research, and its explainability have become some of the most discussed topics around AI today.
In a move that could cement concerns about the unethical use of AI, Joe Redmon, creator of the popular YOLO computer vision algorithm, said in a recent tweet that he has quit research because he was concerned about the direction in which it was heading.
What This All Means
In his 2018 paper, YOLOv3, Redmon wrote about the implications of having a classifier such as YOLO. "If humans have a hard time telling the difference, how much does it matter?" wrote Redmon.
But maybe a better question, Redmon quipped, is:
“What are we going to do with these detectors now that we have them?”
In his paper, part satire and part research ode that makes plain his distaste for the potential misuse of research, Redmon took jabs at Google, Facebook and even the organisation that funds his work.
On a more serious note, he also insisted that computer vision researchers have a responsibility to consider the harm their work might be doing and to think of ways to mitigate it. "We owe the world that much," he wrote.
This whole debate raises a few questions, some of which might go unanswered forever:
- Should the researchers have a multidisciplinary, broader view of the implications of their work?
- Should all research be regulated in its initial stages to thwart malicious societal impacts?
- Who gets to regulate the research?
- Shouldn’t experts create more awareness rather than simply quit?
- Who should pay the price: the innovator, or those who apply the innovation?
Risk Of Being Aloof
One big complaint against Redmon’s decision is that experts shouldn’t quit; instead, they should take responsibility for creating awareness about the pitfalls of AI.
Kevin Zakka, a Google intern, responded to Redmon’s tweet by saying that rather than abandoning his research out of fear of potential misuse, Redmon could have used his respected position in the CV community to raise awareness.
Though Zakka’s response resonates with many, Redmon’s ‘I quit’ tweet has arguably created more awareness than he would have by explaining the ill effects of AI misuse.
The impetus behind this whole episode was the decision by NeurIPS, a prestigious conference, to require researchers to address the societal impacts of their work in their submissions. This itself stirred debate among practitioners about the inherent uncertainty of research, and about how a researcher could be expected to hold a broader, let alone futuristic, perspective on how a given piece of work would affect the populace.
“Oh. Well, the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait.”
— Joe Redmon, in his YOLOv3 paper
Redmon’s objection to the inappropriate usage of his work might alarm policymakers into drafting new research regulations, which could snowball into an undesirable AI winter. However, the ethical dilemmas surrounding research are not new.
They date back to the development of the atomic bomb, when those responsible for the research, such as Robert Oppenheimer, were documented regretfully quoting something as eerie as “Now I am become Death, the destroyer of worlds.” While one group argued against the bomb, others saw the bright side of having ended the war.
So, as the dust settles and topics such as fairness, reliability and accountability in AI get more attention, lawmakers and researchers will, one way or another, be forced to find a sweet spot between unbridled innovation and haphazard regulation.