
Trust Issues: Is AI Black Box Creating A Black Future?


The rise of automation and the use of artificial intelligence in decision making have drawn severe criticism in the last few years, with businesses and people unable to fully trust the technology they rely on. The key reason behind such mistrust is AI’s black box problem: users cannot see what is going on behind the scenes while the system churns through gigantic amounts of data to solve problems. Researchers believe that “if people don’t know how AI comes up with its decisions, they won’t trust it.”

In fact, a key reason for the failure of IBM’s oncology technology was the lack of trust in it among the hundreds of people who used it. Vyacheslav Polonski, the founder of Avantgarde Analytics, once said, “The biggest problem with IBM’s oncology technique was doctors’ mistrust in it.” He further added, “AI’s backend process is usually very difficult for people to understand, and interacting with something one doesn’t understand can cause anxiety and make people feel like they are losing control.”


AI, Behind The Scenes

It is remarkable that artificial intelligence can productise huge amounts of data to solve our daily problems; however, understanding the technology behind it is often laborious for people.

Deep learning networks — a set of algorithms — are made up of a huge number of connections; behind this technique sit the data acquired and the algorithm required to interpret them. We call a system ‘black box AI’ when the user cannot see how such data are churned or how the machine has arrived at a certain decision. Typically, machine learning is combined with big data to sort a user’s features into a class that predicts the behavioural traits of that individual, which can help in areas like credit risk or health status without actually revealing the reasons behind the prediction.
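To make the idea concrete, here is a minimal, hypothetical sketch (not from the article) of the kind of pipeline described: a small neural network sorts synthetic “applicant” features into a risk class but offers no human-readable rationale for its decision. All data, names and model choices here are invented for illustration.

```python
# Illustrative sketch only: a small neural network classifies synthetic
# "applicant" features into a risk class, but exposes no reasoning.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for applicant features (income, history, etc.)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

applicant = X_test[:1]
print("Predicted risk class:", model.predict(applicant)[0])
print("Confidence:", model.predict_proba(applicant)[0].max())
# The weights in model.coefs_ are the only "explanation" available:
# thousands of numbers with no human-readable rationale.
```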


The AI black box has caused problems in sectors like healthcare and the military, where AI algorithms are used to make serious, critical decisions that affect a lot of people. These sectors often suffer from fear of the unknown because of the lack of transparency in these technologies, which can not only slow their progress but also expose them to heavy legal ramifications.

AI’s black box not only represents a lack of transparency but also harbours biases, which are handed down to the algorithms from humans. This, in turn, may lead to unfair decisions against a particular community or race.


XAI To The Rescue

As opposed to black box AI, explainable AI (XAI) is an essential component of human-AI collaboration, aiming to augment the human experience with transparency rather than replace it. It is almost impossible to trust an AI algorithm or tool used for critical decision making if the process is opaque and no rationale is produced for its output.

Explainable AI rests on two main components:

  • Accountability, where users are aware of the technology behind a conclusion and the process of reaching it. Explainable AI should also be able to trace the path of reasoning
  • Auditability, where the user can review the process used for analysing the data. Explainable AI should provide the ability to test these processes and refine them to close future loopholes; a minimal sketch follows this list
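As a hypothetical illustration of what tracing the path of reasoning can look like in practice, the sketch below applies permutation importance, one common post-hoc explainability technique, to a toy classifier. The data, model choice and feature names are all assumptions made for this example, not anything prescribed by the article.

```python
# Illustrative sketch: permutation importance ranks how much each input
# feature drives a model's predictions by shuffling it and measuring the
# resulting drop in accuracy. Data and model are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
# A ranked list like this gives a reviewer something concrete to audit:
# which inputs the decision actually depends on.
```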

Currently, several businesses use AI to automate repetitive tasks and improve productivity; however, using the same AI for more crucial tasks without clear accountability for the process will affect the business and its future revenue. As the world digitises further, the demand for transparency from enterprise stakeholders will grow, and explainable AI will therefore be key to building better technologies and making informed decisions.

Apart from the healthcare industry, explainable AI has become crucial for applications in many other contexts. Jack Dorsey, the CEO of Twitter, once told the media that businesses need to do a better job of explaining how their algorithms work. The best way, according to him, is to open the algorithm up for people to actually see the workings behind it. He further explained, “There’s a whole field of research in AI called ‘explainability’ that is trying to understand how to make algorithms explain the process of decision making.”

Along with Twitter, the US Department of Defense (DoD) is also investing heavily in explainable AI. David Gunning, program manager at the Defense Advanced Research Projects Agency (DARPA), explained the importance of explainable AI/ML for future warfighters. He said, “Explainable AI can help in understanding and effectively managing an emerging generation of artificially intelligent machine partners. In fact, new machine-learning systems will also have the ability to explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future.”

Another instance is the London-based AI research company DeepMind, which uses a deep learning technique to assign treatment priority by looking at patients’ eye scans. According to DeepMind’s researchers, the model provides possible explanations for the tool’s recommendations, explaining each label to patients.

AI Conundrum

Currently, unexplained AI is used everywhere from the kitchen, for finding a recipe, to social media, where platforms use it to block trolls. However, it isn’t fair when no one can explain the algorithm behind AI used for criminal sentencing that shows racial bias — rating black men as higher crime risks than white men. With a sufficiently transparent system and no black box, one can tell where the algorithm has gone wrong and then fix it.

Although AI transparency can help mitigate issues of authenticity, reduce discrimination and increase trust, many believe there is a huge downside to it. Businesses worry that if the world could crack the workings of their AI, any layman could use it, which isn’t favourable for companies making money from it. Big tech giants like Facebook, Amazon and Google, as well as Palantir — a big data analytics company — which run their businesses on black box AI, will never be comfortable exposing their workings and research to competitors.

And, in the middle of the AI gold rush, companies believe that real economic growth can only be achieved by those who package AI onto anything and everything, making it sellable. Businesses have also used the AI black box to establish themselves in industries like healthcare, insurance, search and the critical fossil fuel sector.

These companies feel threatened by the idea of explainable AI, which could regulate the whole process. In short, transparency is expensive and would surely expose some of the unethical ways the world is run.

On the other hand, AI expert Evert Haasdijk has always been a strong supporter of ‘transparent AI’. At a global AI summit, Haasdijk explained how transparent or explainable AI aims to enable humans to understand what is happening. He said, “Transparent AI isn’t about publishing algorithms online. Businesses usually like to keep their algorithm details confidential, and the people using these systems often don’t know how to make sense of them. Therefore, just publishing lines of code isn’t very helpful — the point of transparent AI is that the outcome of an algorithm can be properly explained.”

One major example that made headlines is Amazon’s AI recruiting tool, which analysed ten years of applications to learn the profile of high-performing employees and to hire new ones against that standard. It was later revealed that the tool was biased in favour of male applicants, a result of societal influences such as wage gaps and gender imbalance in technology jobs.

Nevertheless, as AI begins to transform businesses, several researchers are making strides to address the opaqueness of such a capable technology. Although a root fix is miles away, a number of promising approaches are emerging, with scientists exploring ways to decode how algorithms are applied in the hope of illuminating the decision-making process. Some are also changing programming languages, or inventing new ones, to gain better control over training.

Tradeoff

Solving this issue is not easy, as most AI tools on the market are built on opaque neural networks that are hard to decipher. Users have no choice but to trust the vendor company and its training process; however, AI experts believe that the real solution to AI’s black box is a shift in focus towards an approach called white box AI.

This approach relies on a reliable data and training process in which analysts and data scientists can explain the decision-making process and change it if required. Such an approach goes through rigorous training and testing to ensure real accuracy and transparency between the two parties.
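A hypothetical sketch of what such a white box model might look like in practice follows: a shallow decision tree whose rules an analyst can read, review and change. The feature names and data are invented for illustration; the article does not prescribe a specific technique.

```python
# Illustrative sketch of a "white box" model: a shallow decision tree whose
# decision rules are directly readable by an analyst. All names are invented.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "repayment_history", "utilisation"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction follows one path through these human-readable rules,
# which can be reviewed and, if required, retrained or constrained.
print(export_text(tree, feature_names=feature_names))
```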

To buttress this point, Sankar Narayanan, the chief practice officer at Fractal Analytics, an artificial intelligence company, once said in an interview, “AI needs to be traceable and explainable. It should be reliable, unbiased and robust to be able to handle any errors that happen across the AI solution’s lifecycle.”

He further believes that, although AI is meant to mimic humans, the human thought process is often irrational and unexplainable, which in turn creates a black box.

Another key step towards solving some of the AI black box problems is to pre-analyse the data fed into the tool and check the algorithm’s output in order to build a better decision-making process; once that is done, skilled developers can modify and adjust the system to reflect human ethics.
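Here is a minimal, hypothetical sketch of that pre-analysis step, assuming a toy dataset with an invented sensitive attribute: it compares label rates and prediction rates across groups so a developer can spot skew before trusting the tool.

```python
# Illustrative sketch: pre-analyse the input data and check the model's
# output for skew across a sensitive group. All columns are invented.
import pandas as pd

# Toy records: a sensitive attribute, the ground-truth label, and the
# model's prediction for each person.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "label":      [1, 0, 1, 0, 0, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 0, 0, 0, 1],
})

# 1. Pre-analyse the data: is one group labelled positive more often?
print(df.groupby("group")["label"].mean())

# 2. Check the output: does the model favour one group in its predictions?
print(df.groupby("group")["prediction"].mean())
# Large gaps between these rates flag where developers may need to adjust
# the data or the training process before the tool is deployed.
```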

Another concern with the tradeoff is the need to ‘dumb down’ AI algorithms to make them understandable to the layman; it is widely believed that the more accurate the algorithm, the harder it is to explain to people. Rooting out biased AI would not only create tension for business giants but also put dozens of companies out of business, losing them a huge amount of money. Without the black box, it will be difficult for companies to blame their mishaps on algorithms, and they will be pushed to actually answer the questions put to them.

In fact, much industrial and law-enforcement decision making in the US still runs on the black box, and the government is unlikely to end this arrangement anytime soon. As long as the world’s leaders continue to profit from biased AI, it will remain a fixture of society.

Outlook

Recently, industries across the world have been making strides against the biased behaviour of AI and its ethical concerns. AI’s black box not only complicates the process of filtering out inappropriate content but also makes it hard for developers to analyse the output. With biased data collection and unethical training processes, the black box can create risks that are hard to resolve. This leaves companies on their own, individually searching for guidelines on the ethics of AI data collection and deployment.


Sejuti Das

Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com