Hemant Misra From Swiggy Says Explainable AI May Be Overrated

According to Hemant Misra, Head of Applied Research at Swiggy, “If human decisions are biased, then why do we care if AI is a little biased in some cases?”

We tend to assume that machines are more objective and fair than people. Yet there have been several instances, and plenty of controversy, of AI-powered systems producing biased outcomes. For instance, in 2016 it was revealed that an AI system used in US courts was more likely to label black defendants as “high risk” than white defendants from similar backgrounds.

What may be surprising is that the algorithm was biased even though the system was never explicitly given any information about defendants’ race; the skew crept in through other attributes in the data that correlate with race. The bigger question is whether the net effect of ML-powered systems is to make the world fairer and more efficient, or to amplify human bias at scale.

Hemant Misra From Swiggy On Why AI Bias Is Natural

Many significant decisions in our lives are made by systems of one kind or another, whether those systems consist of people, machines, or a mix of both. Many of these existing systems are biased in ways both obvious and subtle. The growing role of ML in decision-making systems can be controversial, especially when important decisions are at stake. And there is another side to the bias story: humans themselves.

But some experts argue that if the world keeps functioning even though everyone is biased about something, the AI-powered world can be made to work in much the same way. One such expert is Hemant Misra, Head of Applied Research at the popular food delivery app Swiggy and a specialist in speech recognition and natural language processing.

“If AI is being used to judge court cases, or when it is used in healthcare in the future, it will be important for it not to be biased. But everybody is biased, so why is there so much of a problem with machines being biased? There is a historical reason for any bias, rooted in human interactions and the evolution of our civilisation. When we are biased, the data we generate will also be biased. On the other hand, we need to ask whether we are okay as a society with a machine that is in fact not biased, since bias has been part of our evolution,” said Misra, who leads Applied Research at Swiggy.

Do We Need Explainability In AI?

What powers an AI system is the data it has learned from. That makes it different from a standard software program, where people explicitly write every line of code. The accuracy of an ML system can be measured by humans, but visibility into how such a system actually arrives at its decisions is limited. This is where explainable artificial intelligence could come in. If people could monitor the “reasoning” an algorithm used to make decisions about individuals in high-risk categories, they might be able to correct for bias before it has a real impact. But is explainable AI something we need?
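To make the idea concrete, here is a minimal sketch (not anything Misra or Swiggy described) of what “monitoring the reasoning” of a model might look like: train a simple classifier on synthetic, loan-style data and use scikit-learn’s permutation importance to see which features actually drive its decisions. All feature names and data here are hypothetical, including a made-up proxy attribute standing in for a biased historical signal.

```python
# Hypothetical example: inspect which features a model relies on,
# including a proxy attribute that leaks bias from historical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Made-up features: income, existing debt, and a proxy attribute (e.g. postcode group)
income = rng.normal(50_000, 15_000, n)
debt = rng.normal(10_000, 5_000, n)
postcode_group = rng.integers(0, 5, n)

# Synthetic label: approval driven mostly by income and debt,
# with a small leak through the proxy attribute to mimic biased history
score = 0.6 * (income / 50_000) - 0.5 * (debt / 10_000) + 0.2 * (postcode_group == 0)
approved = (score + rng.normal(0, 0.2, n) > 0.1).astype(int)

X = np.column_stack([income, debt, postcode_group])
X_train, X_test, y_train, y_test = train_test_split(X, approved, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt", "postcode_group"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

If the proxy attribute shows up with a non-trivial importance, that is the kind of signal an auditor could act on before the model affects real decisions, which is essentially the promise of explainability tools.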

According to Misra, while we need explainability in critical cases, it may not be as relevant otherwise. “We know everyone is biased. So how many of us go to employers and ask them to explain the rejections? How many of us go to the bank when it rejects a credit card application and ask it to explain the decision? They have their own methodology. If human decisions are biased, then why do we care if AI is a little biased in some cases? In cases where it is a matter of life and death, of course explainability is key, just as we want to understand why a doctor takes a particular decision that may impact a person’s life. Apart from that, it is overrated,” opined Misra, speaking at a recent ThoughtWorks Live event in Bengaluru.

Vishal Chawla
Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.
