
Hemant Misra From Swiggy Says Explainable AI May Be Overrated


According to Hemant Misra, Head of Applied Research at Swiggy, “If human decisions are biased, then why do we care if AI is a little biased in some cases?”

Most of us assume that machines are more objective and fair than people. However, there have already been several instances and controversies of AI-powered systems producing biased outcomes. For instance, in 2016, it was revealed that AI systems used in US courts were more likely to label black defendants as “high risk” than white defendants from similar backgrounds.

It may be shocking that the algorithm was biased even though the system was never explicitly given any information about the defendants’ race, suggesting that bias can creep in through other attributes correlated with race. The bigger question is whether the net effect of ML-powered systems is to make the world fairer and more efficient, or to amplify human bias on a larger scale.

Hemant Misra From Swiggy On Why AI Bias Is Natural

Many significant decisions in our lives are made by systems of one kind or another, whether those systems consist of people, machines, or a mix of both. Many of these systems are biased in both obvious and subtle ways. The expanding role of ML in decision-making systems can be controversial, especially when important decisions are at stake. But there is another side to the bias story: humans themselves.

Some experts argue that if the world manages to function even though everyone is biased about something, AI systems can be made to work in the same way. One such expert is Hemant Misra, Head of Applied Research at the popular food delivery app Swiggy and an expert in speech recognition and natural language processing.

“If AI is being used to judge court cases, or when it is used in healthcare in the future, it will be important for it not to be biased. But everybody is biased, so why is there so much of a problem with machines being biased? There is a historical reason for every bias, rooted in human interactions and the evolution of our civilisation. When we are biased, the data we generate will also be biased. On the other hand, we need to ask whether we are okay as a society if a machine is in fact not biased, since bias has been part of our evolution,” said Misra.

Do We Need Explainability In AI?

What fuels an AI system is the data it learned from. In that sense, it is unlike a standard software program, where people explicitly write every line of code. The accuracy of an ML system can be measured by people, but visibility into how such a system actually makes decisions is limited. This is where Explainable Artificial Intelligence could come in. People might be able to correct for bias before it has a real impact if they could monitor the “thinking” an algorithm used to make decisions about individuals in high-risk categories. But is explainable AI something we really need?
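For illustration only, the minimal sketch below shows one common way of peeking into a model’s “thinking”: permutation importance from scikit-learn, which estimates how much a model leans on each input by shuffling that input and measuring the drop in accuracy. The data and features here are synthetic and hypothetical; this is not Swiggy’s system or any specific court or credit model.

# A minimal, purely illustrative sketch of model explainability via
# permutation importance (synthetic data, hypothetical features).
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "applicant" data: 5 features, binary approve/reject label.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Shuffle one feature at a time and measure how much accuracy drops --
# a rough proxy for which inputs drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")

A large importance on a sensitive attribute, or on an obvious proxy for one, would be the kind of signal an auditor could act on before the system affects real people.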

According to Misra, while we need explainability in critical cases, it may not be as relevant otherwise. “We know everyone is biased. So, how many of us go to employers and ask them to explain their rejections? How many of us go to the bank, when it rejects a credit card application, and ask it to explain the decision? They have their own methodology. If human decisions are biased, then why do we care if AI is a little biased in some cases? In cases where it is a matter of life and death, of course, explainability is key, just as we want to understand why a doctor may take a particular decision that impacts a person’s life. Apart from that, it is overrated,” opined Misra, speaking at a recent ThoughtWorks Live event in Bengaluru.


Vishal Chawla

Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM’s video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.