So, what really is trustworthy AI? Padmashree Shagrithaya, global head of AI Analytics and Data Science at Capgemini, shed light on this hot topic at the Rising 2021 event.
“For a consumer, it is whether the AI is trustworthy, ethical and whether or not it will forsake privacy. For business, the definition is very different: is the system really doing what it is supposed to be doing and whether it is fair or not. In the end, for regulators, it means whether the AI system is benefiting humanity or causing more unfairness,” said Shagrithaya.
“In the end, to have a successful AI project also means that it is trustworthy. For an AI project to be trustworthy, defining what people trust such a system to do is important. Definition and scope are a part of the whole thing; the other part is how do we prove a system is trustworthy. These are the two paradigms that one needs to really think about,” she added.
Taking the example of a fraud detection system in a banking setup, Shagrithaya said the scope of the model must first be clearly defined. Both the accountable owner (a responsible human at the centre of the system) and the fraudster must be identified, along with exactly what would constitute a fraud situation. Once the scope and roles are defined, the second step is to ensure the system works to achieve the objective. The model should be based on proper research that leaves no loophole for the fraudster to exploit.
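As a rough illustration of that first step, the scope of such a model could be written down as an explicit, reviewable artifact rather than left implicit. The sketch below is purely hypothetical (the field names, channels, and threshold are illustrative, not drawn from Capgemini's approach):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FraudScope:
    accountable_owner: str    # the responsible human at the centre of the system
    monitored_channels: tuple # where the fraudster could act
    fraud_definition: str     # exactly what counts as a fraud situation
    amount_threshold: float   # transactions above this get extra scrutiny

def is_in_scope(scope: FraudScope, channel: str) -> bool:
    """The model should act only on channels the scope explicitly lists."""
    return channel in scope.monitored_channels

scope = FraudScope(
    accountable_owner="fraud-ops lead",
    monitored_channels=("card", "wire"),
    fraud_definition="transaction initiated without the account holder's consent",
    amount_threshold=10_000.0,
)

print(is_in_scope(scope, "card"))    # True
print(is_in_scope(scope, "crypto"))  # False: out of scope, escalate to a human
```

Making the scope a frozen object like this keeps it from drifting silently: any change to what the model covers has to go through the accountable owner.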
“From a business point of view, it is important that AI does what it is supposed to do within the constraints set by the business, positively. It should base decisions on reality. All these factors associated with trustworthy AI can be looked at from three lenses: business, regulatory, and ethical,” said Shagrithaya.
Reiterating that the human in control is the central piece of the trustworthy AI model, Shagrithaya defined this person as the one who is accountable to the company and who ensures the AI system does what it was designed to do. The role demands clarity about the delivery of each task. Moreover, AI must perform only the work it is asked to do, and at any given time the human in control must be able to override it.
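One way to picture that override requirement is a thin wrapper where a human decision, once recorded, always wins over the model's output. This is a minimal sketch under assumed names (the class, the toy threshold model, and the case IDs are all hypothetical):

```python
class OverridableDecision:
    """Wraps a model so the human in control can override any case."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.overrides = {}  # case_id -> (decision, who)

    def override(self, case_id, decision, who):
        # the accountable human records an override for a specific case
        self.overrides[case_id] = (decision, who)

    def decide(self, case_id, features):
        # a recorded human decision always takes precedence over the model
        if case_id in self.overrides:
            decision, who = self.overrides[case_id]
            return decision, f"overridden by {who}"
        return self.model_fn(features), "model"

# toy model: flag transactions above a threshold
flagger = OverridableDecision(lambda f: f["amount"] > 10_000)

print(flagger.decide("tx1", {"amount": 50_000}))  # (True, 'model')
flagger.override("tx1", False, "fraud-ops lead")
print(flagger.decide("tx1", {"amount": 50_000}))  # (False, 'overridden by fraud-ops lead')
```

Returning the provenance alongside the decision also supports the openness Shagrithaya asks for: every outcome says whether it came from the model or from a named human.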
Each stage, from inception to decommissioning of an AI system, must be open to humans. It is also important to have an explainable AI system in place. Other factors include being ethical, transparent, and fair, said Shagrithaya.
Positive intent while building an AI system is very important. The system should allow the accountable person to build readable, understandable algorithms that define which scenarios are good and which are bad.
Speaking of algorithm development, the data should have a clearly defined lineage. It should also be devoid of any biases and should ensure privacy, especially in critical industries such as medicine and finance. “Federated AI is a fast-growing trend where the privacy of the user is safeguarded,” said Shagrithaya.
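The privacy idea behind federated AI can be sketched in a few lines: each client trains on its own private data and shares only model parameters, which a server averages; raw data never leaves the client. This is a deliberately simplified, FedAvg-style illustration with a one-parameter model, not a production recipe:

```python
def local_update(weight, data, lr=0.1):
    # one gradient-descent step on the local mean-squared error,
    # computed only from this client's private data
    grad = sum(2 * (weight - x) for x in data) / len(data)
    return weight - lr * grad

def federated_round(global_weight, client_datasets):
    # each client updates locally; the server sees and averages weights only
    local_weights = [local_update(global_weight, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

clients = [[1.0, 2.0], [3.0], [4.0, 5.0]]  # private datasets, never pooled
w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward the average of the per-client means
```

The point of the sketch is the data flow: only `w` crosses the network, which is why federated approaches are attractive in privacy-critical fields such as medicine and finance. (Real deployments add safeguards like secure aggregation, since shared weights can still leak information.)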
Large AI and machine learning systems have a big carbon footprint, which is detrimental to the planet as a whole. Hence, building green AI should also be one of a company's main goals.