Ratan Tata trusts his gut more than the numbers when investing in startups. He is not alone: many CEOs go with their gut and make snap judgments, according to a Deloitte study. Even in the most data-driven organisations, intuition does a surprising amount of the heavy lifting in decision-making. There’s a reason the gut is called the second brain.
Now, in the age of Big Data, the million-dollar question is: Can AI systems replace intuition in high-stakes decision making? The answer is a resounding no, thanks to the black-box nature of AI/ML models.
In the last decade, a good number of AI/ML startups have cropped up across the world. However, most of them fell by the wayside after potential buyers refused to be sold on their AI models; business leaders would rather stick with their gut than trust a model they cannot understand.
XAI marks the spot
Explainable artificial intelligence (XAI) is a set of methods that allows humans to understand the output of machine learning models. XAI is critical in decision making: it helps build trust in AI models and situates model accuracy, fairness, transparency and outcomes in the right context. Though many companies are building ever more sophisticated AI models, the improvement in performance comes with a downside: more sophistication means more parameters. These complex algorithms parse millions of variables to reach a decision, and it is humanly impossible to reverse engineer or explain an output because of the multitude of variables involved.
Additionally, an AI system is only as good as the data used to train it. Though advances in NLP, computer vision and deep learning have enabled a degree of generalisation, the models are far from foolproof. Businesses screen for fairness, explainability, robustness, data lineage and transparency before onboarding AI models, and most AI startups don’t pass muster because they cannot explain the inner workings of their models.
Even though 83.8% of AI software companies claim to use some form of explainability framework, none of them can guarantee the framework’s efficacy in the real world.
Let’s assume we have figured out how to make models more explainable. But how do you define explainability? A metric that works for a developer may not suit a GDPR compliance manager, so the question of who gets to set the benchmark is critical. Most human thinking and decision making happens unconsciously and doesn’t conform to a universal standard. If humans can’t reach a consensus on what a good explanation is, how do we expect AI to explain itself?
Use cases
Take, for example, an NBFC or a fintech giant that uses an AI model to assess the creditworthiness of loan applicants. Traditional scorecards could explain why an applicant was rejected because only a few factors are at play. But introduce an AI model for credit underwriting and you are in ‘factor exponential’ territory: the model weighs millions of variables to determine whether an applicant is creditworthy, and the machine’s reasoning behind these decisions is hard, if not impossible, to fathom.
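To make the credit example concrete, here is a minimal sketch of one common model-agnostic explainability technique, permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. The data and feature names (income, debt ratio, late payments) are entirely synthetic and illustrative, not drawn from any real lender’s model.

```python
# Sketch: model-agnostic explainability for a hypothetical credit model.
# All features and labels are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000

# Hypothetical applicant features
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
late_payments = rng.poisson(1.5, n)
X = np.column_stack([income, debt_ratio, late_payments])

# Synthetic "defaulted" label, driven mostly by debt ratio and late payments
risk = debt_ratio * 2 + late_payments * 0.5 - income / 100_000
y = (risk + rng.normal(0, 0.5, n) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: shuffle each column and measure the score drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt_ratio", "late_payments"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In practice, teams often reach for richer per-applicant explanations such as SHAP values or LIME, but the underlying idea is the same: quantify how much each input contributed to the model’s output, so a rejection can be traced to specific factors.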
In healthcare, XAI can be a matter of life and death. AI models used for differential diagnosis might output a wrong result because of a spurious correlation, overfitting or any number of other failure modes.
Most startups offering AI/ML tools for decision making don’t make it because they can’t win a vote of confidence from the decision-makers. Business leaders will not enlist the services of such startups unless they understand how the AI model works. So AI/ML startups must work on explainability or risk perishing: unless they can lift the hood and spell out exactly how a model arrives at its outputs, businesses will dismiss them as snake oil.
This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry.