
YOLO Creator Quits AI Research Citing Ethical Concerns


Forecasts of machine learning’s societal impact have long oscillated between a future haven of benevolent possibilities and an exaggerated dystopia. The ethics of AI usage, the rationale behind research, and its explainability are among the most discussed topics in AI today.

In a move that could cement concerns about the unethical use of AI, Joe Redmon, creator of the popular YOLO computer vision algorithm, said in a recent Twitter post that he has quit research because he was concerned about the direction in which it was going.

What This All Means


In his 2018 paper, YOLOv3: An Incremental Improvement, Redmon wrote about the implications of having a detector such as YOLO. “If humans have a hard time telling the difference, how much does it matter?” he asked.

But maybe, Redmon quipped, a better question is:

“What are we going to do with these detectors now that we have them?”

In the paper, which is part satire and part research ode, dripping with his distaste for the potential misuse of research, Redmon took jabs at Google, Facebook, and even the organisation that funds his research.

On a more serious note, he also insisted that computer vision researchers have a responsibility to consider the harm their work might be doing and to think of ways to mitigate it.

We owe the world that much

This whole debate raises a few questions that might go unanswered forever:

  • Should the researchers have a multidisciplinary, broader view of the implications of their work?
  • Should every research be regulated in its initial stages to thwart malicious societal impacts?
  • Who gets to regulate the research?
  • Shouldn’t the experts create more awareness rather than simply quit?
  • Who should pay the price: the innovator or those who apply the innovation?

Risk Of Being Aloof

One big complaint against Redmon’s decision is that experts shouldn’t quit. Instead, they should take responsibility for creating awareness about the pitfalls of AI.

Kevin Zakka, a Google intern, responded to Redmon’s tweet by saying that rather than abandoning his research out of fear of potential misuse, Redmon might have used his respected position in the CV community to raise awareness.

Though Zakka’s response resonates with most people, Redmon’s ‘I quit’ tweet has, on the contrary, created more awareness than he would have achieved by explaining the ill effects of AI usage.

Part of the impetus behind this whole ordeal came from the decision by NeurIPS, a prestigious conference, to have researchers include a statement on the societal impact of their work during submission. This itself stirred debate amongst practitioners about the uncertainty of research and whether a researcher can have a broader, let alone a futuristic, perspective on how certain research would impact the populace.

Oh. Well, the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait.

Joe Redmon, in his YOLOv3 paper

Redmon’s objection to the inappropriate use of his work might alarm policymakers into drafting new research regulations that could snowball into an undesirable AI winter. However, the ethical dilemmas surrounding research are not new.

These dilemmas date back to the making of the atomic bomb, when those responsible for the research, such as Robert Oppenheimer, were documented regretfully quoting something as eerie as “Now I am become Death, the destroyer of worlds.” While one group argued against the bomb, others saw the bright side of having ended the war.

So, as the dust settles and topics such as fairness, reliability and accountability in AI get more attention, lawmakers and researchers will, one way or the other, be forced to find a sweet spot between unbridled innovation and haphazard regulation.

PS: The story was written using a keyboard.

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.