Council Post: Does a lack of XAI put AI startups at risk of failing?

Instinct is a marvellous thing. It can neither be explained nor ignored. ― Agatha Christie

Ratan Tata trusts his gut more than the numbers when investing in startups. He is not alone: according to a Deloitte study, many CEOs go with their gut and make snap judgments. Even in the most data-driven organisations, you’d be surprised how much heavy lifting intuition does in the decision-making process. There’s a reason the gut is called the second brain.

Now, in the age of Big Data, the million-dollar question is: Can AI systems replace intuition in high-stakes decision making? The answer is a resounding no, thanks to the black-box nature of AI/ML models.

In the last decade, a good number of AI/ML startups have cropped up across the world. However, most of them fell by the wayside after potential buyers refused to be sold on their AI models; business leaders would rather stick with their gut than trust the models.

XAI marks the spot

Explainable artificial intelligence (XAI) is a set of methods that allows humans to understand the output of machine learning models. XAI is critical in decision making as it helps build trust in AI models and situates model accuracy, fairness, transparency and outcomes in the right context. Though many companies are building ever more sophisticated AI models, the improvement in performance comes with a downside: more sophistication means more parameters. These complex algorithms parse millions of variables to come to a decision, and it is humanly impossible to reverse-engineer or explain an output given the multitude of variables involved.
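To make the idea concrete, here is a minimal sketch of what post-hoc explainability can look like in practice, assuming the open-source shap library and a scikit-learn gradient-boosted model; the data and feature set are synthetic and purely illustrative.

```python
# Minimal sketch: attributing a single prediction to its input features with SHAP.
# Assumes scikit-learn and shap are installed; the dataset is synthetic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # five hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label driven mostly by feature 0

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])   # explain the first row only
print(contributions)  # one contribution per feature, in the model's log-odds space
```

Tooling like this helps, but as the rest of this piece argues, whether such attributions count as an explanation depends on who is asking.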

Additionally, an AI system is only as good as the data used to train it. Though advances in NLP, computer vision, and deep learning have made generalisation possible to an extent, the models are far from foolproof. Businesses screen for fairness, explainability, robustness, data lineage, and transparency before onboarding AI models. Most AI startups don’t pass muster due to their inability to explain the inner workings of the ML models.

Even though 83.8% of AI software companies claim to use some form of explainability framework, none of them can guarantee that these frameworks are effective in the real world.

Let’s assume we have figured out how to make models more explainable. Even then, how do you define explainability? A metric that works well for a developer may not be suitable for a GDPR compliance manager, so the question of who gets to set the benchmark is critical. Most human thinking and decision making occurs unconsciously and doesn’t conform to a universal standard. If humans can’t reach a consensus on what counts as an explanation, how do we expect AI to explain itself?

Use cases

Take, for example, an NBFC or a fintech giant that uses an AI model to assess the creditworthiness of loan applicants. Traditional models could explain why an applicant was rejected because only a few factors were at play. But introduce an AI model for credit underwriting and you are in ‘factor exponential’ territory: the model weighs millions of variables to decide whether an applicant is creditworthy, and its reasoning is hard to fathom, if not impossible to reconstruct.
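For contrast, here is an illustrative sketch of the traditional, low-dimensional case: a credit decision driven by a handful of named factors, where the reasons for a rejection can be read straight off the model. The feature names and data below are hypothetical.

```python
# Illustrative only: a scorecard-style logistic regression over a few named factors,
# where each factor's contribution to an applicant's score is directly readable.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "credit_history_length", "existing_debt", "late_payments"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 2] - X[:, 3] > 0).astype(int)  # synthetic approval labels

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant          # per-factor contribution to the log-odds
for name, value in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name}: {value:+.2f}")                  # the most negative factors explain a rejection
```

Scale this up to millions of interacting variables and the decomposition stops being something a loan officer, or a regulator, can read.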

In healthcare, XAI could be a matter of life and death. AI models used for differential diagnosis might output a wrong diagnosis because of a misread correlation, overfitting or scores of other reasons.

Most startups offering AI/ML tools for decision making don’t make it because they can’t get a vote of confidence from the decision-makers. Business leaders will not enlist the services of such startups unless they understand how an AI model works. AI/ML startups should therefore work on explainability or risk perishing. Unless they can lift the hood and spell out how exactly a model arrives at its decisions, businesses will dismiss them as snake oil.

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill the form here.

Anirban Nandi
With close to 15 years of professional experience, Anirban specialises in data science, business analytics, and data engineering, spanning various verticals of online and offline retail and building analytics teams from the ground up. Following his Master’s in Economics from JNU, Anirban started his career at Target and spent more than eight years developing in-house products like customer personalisation, recommendation systems, and search engine classifiers. After Target, Anirban became one of the founding members at Data Labs (Landmark Group) and spent more than 4.5 years building an onshore and offshore team of ~100 members working on assortment, inventory, pricing, marketing, eCommerce and customer analytics solutions.

Our Upcoming Events

Conference, in-person (Bangalore)
Machine Learning Developers Summit (MLDS) 2023
19-20th Jan, 2023

Conference, in-person (Bangalore)
Rising 2023 | Women in Tech Conference
16-17th Mar, 2023

Conference, in-person (Bangalore)
Data Engineering Summit (DES) 2023
27-28th Apr, 2023

Conference, in-person (Bangalore)
MachineCon 2023
23rd Jun, 2023

Conference, in-person (Bangalore)
Cypher 2023
20-22nd Sep, 2023

3 Ways to Join our Community

Whatsapp group

Discover special offers, top stories, upcoming events, and more.

Discord Server

Stay Connected with a larger ecosystem of data science and ML Professionals

Subscribe to our newsletter

Get the latest updates from AIM