
STAR Framework for Measuring AI Trust: Safety, Transparency, Accountability and Reliability


In 2016, Microsoft released Tay, an AI-based conversational chatbot that interacted with Twitter users through messages and tweets. Within a short period of operation, the chatbot started replying with offensive and racist messages. Because the bot learned from its conversations with users, a coordinated attack by people who deliberately fed it inflammatory content introduced racist bias into the system. This incident was a major eye-opener.

Caught in the race to introduce the next big AI innovation, most organisations put ethics on the back burner. But alongside the growing innovation and development in AI, ethical issues are now gaining momentum. Activists have long stressed the importance of building AI models that are unbiased, governable, ethical, and trustworthy.

Trustworthy AI

Before we delve into what makes an AI system trustworthy, we must ponder what does not. In layman's terms, an untrustworthy AI is a model or algorithm that is too dangerous to be deployed in public systems: one that intensifies existing biases and unethical practices and may introduce new ones. Examples include surveillance-based targeted ads, autonomous lethal drones, and profiling-based hiring, among others.

Apart from bias, explainability, or the lack of it, is also a big indicator of how trustworthy an AI model is. AI and machine learning systems are known to suffer from the black box problem: such systems are created directly from data by an algorithm, and humans, even the ones who designed them, cannot understand how the system works or makes its decisions.
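To make the black box problem, and one way of peering into it, concrete: a common model-agnostic probe is permutation feature importance, which shuffles one input feature at a time and measures how much the model's test accuracy drops. The sketch below is a minimal illustration assuming scikit-learn and one of its bundled demonstration datasets; it is not a complete explainability solution.

```python
# Minimal sketch: probing a black-box model with permutation feature
# importance. Assumes scikit-learn; the dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but its internal decision logic is opaque.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, drop in ranked[:5]:
    print(f"{name}: accuracy drop {drop:.3f}")
```

Even this simple probe turns an opaque model into something a reviewer can interrogate: the features the model leans on most are surfaced, even if its internal logic remains hidden.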

Trustworthy AI is based on the idea that trust is the foundation on which successful societies and economies are built. Individuals and societies will be able to fully realise the potential of AI if trust can be established in its development, deployment, and use. In a paper titled ‘Trustworthy Artificial Intelligence’, the authors list five foundational principles of AI trustworthiness – beneficence, non-maleficence, autonomy, justice, and explicability. 

Failing to mitigate distrust, and producing incorrect or false outcomes, lowers the potential of AI for industry and society as a whole. Companies and corporations need to ensure that the AI they use strengthens human decision-making rather than creating new problems and challenges, and they are responsible for adopting practices that minimise AI bias.

Framework for ensuring Trustworthy AI

Internally, many big companies have been adopting frameworks that help them build ethical and reliable models. Given the impact and influence AI currently has on our daily lives and on critical fields like healthcare, national security, and the economy, governments worldwide are stepping up and developing their own frameworks for ethical and trustworthy AI. A good example is the EU's General Data Protection Regulation (GDPR), often touted as the most comprehensive and strict such framework. GDPR introduces a right to explanation that allows individuals to seek and obtain meaningful information about the logic involved in automated decision-making with legal or similarly significant effects. Technology that is incapable of explaining the logic of its workings risks remaining a 'dead letter' under such regulation.

A framework to ensure trustworthy AI can include many different parameters; for this article, I propose four major criteria, which together form the STAR framework:

Safety: An AI or machine learning system's function should be defined so as to avoid systematic and random failures. The model must avoid unintended and harmful behaviour and adhere to best practices. Models are constantly exposed to attacks from intruders and other adversaries, so there should be mechanisms to detect and deal with such attacks and keep users safe from harm. Privacy and security form an important component of a safe AI.
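As one illustration of the detection mechanisms mentioned above, the sketch below implements a simple pre-inference input guard that rejects inputs falling far outside the range seen in training. This is a hypothetical, minimal safeguard, not a complete defence; a real system would layer it with authentication, rate limiting, and adversarial testing.

```python
# Minimal sketch: a pre-inference input guard that rejects inputs far
# outside the range seen in trusted training data. Illustrative only;
# not a complete defence against adversarial attacks.
import numpy as np

class InputGuard:
    def __init__(self, X_train: np.ndarray, tolerance: float = 3.0):
        # Record per-feature location and spread from trusted training data.
        self.mean = X_train.mean(axis=0)
        self.std = X_train.std(axis=0) + 1e-9
        self.tolerance = tolerance

    def is_safe(self, x: np.ndarray) -> bool:
        # Reject inputs where any feature lies more than `tolerance`
        # standard deviations from what the model saw during training.
        z = np.abs((x - self.mean) / self.std)
        return bool(np.all(z < self.tolerance))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))  # stand-in for trusted training data
guard = InputGuard(X_train)

print(guard.is_safe(np.array([0.1, -0.2, 0.5, 0.0])))   # True: in range
print(guard.is_safe(np.array([0.1, -0.2, 99.0, 0.0])))  # False: anomalous
```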

Transparency: As discussed above, transparency is one of the most desired features in an AI system. It is made possible through explainable AI, which has gradually emerged as one of the most talked-about topics in AI. That said, some have raised concerns about a transparency-efficiency tradeoff, which some call AI's transparency paradox; the key lies in striking a balance between the two. Notably, the concept of ML observability has been on the rise. It refers to understanding the model's performance across all stages of the development cycle, and it helps in monitoring and flagging changes in parameters like the data distribution of each feature.
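To make ML observability concrete, the sketch below computes the population stability index (PSI), a commonly used drift statistic, to flag when a feature's live distribution has shifted away from its training distribution. The 0.2 alert threshold is a widely used rule of thumb rather than a standard.

```python
# Minimal sketch: flagging data drift with the population stability index
# (PSI), one common ML observability check. Thresholds are rules of thumb.
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    # Bin both samples on the training (expected) distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    observed = np.clip(observed, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    o_frac = np.histogram(observed, edges)[0] / len(observed) + 1e-6
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_feature = rng.normal(0.6, 1.0, 10_000)   # shifted live distribution

score = psi(train_feature, live_feature)
print(f"PSI = {score:.3f}")
if score > 0.2:  # common rule-of-thumb alert threshold
    print("Significant drift detected: investigate the feature pipeline.")
```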

Accountability: Accountable AI entails the practice of developing and deploying AI that empowers employees and businesses, positively impacts society, and allows businesses to scale their models confidently. Businesses must implement policies that outline accountability factors. These can be tricky waters to tread, as there is a dilemma over who would be responsible: the company, the engineer, or the algorithm.
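Whatever the policy answer, accountability needs a technical audit trail: every automated decision should be traceable to a model version and an input. Below is a minimal, illustrative sketch of such a decision log; the field names are hypothetical, and a production system would use durable, append-only storage.

```python
# Minimal sketch: an audit log for automated decisions, so each outcome can
# be traced to a model version and input. Field names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, features: dict, prediction,
                 logfile: str = "decisions.log") -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input so the record is verifiable without storing raw
        # personal data alongside the decision.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

print(log_decision("credit-model-v1.3", {"income": 52000, "tenure": 4}, "approve"))
```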

Reliability: In simple terms, a reliable model produces consistent results while minimising errors. However, the reliability of a model is not just limited to its output; it also covers the method the system adopts to arrive at it. The model must be transparent and explainable enough for the user to know and understand any error the system commits. The company should take a user-centric approach to defining the model's key performance indicators (KPIs).
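Reliability can also be tested empirically before deployment: a model that flips its prediction under tiny, meaningless perturbations of its input is not producing consistent results. The sketch below is one illustrative consistency check, assuming scikit-learn; the noise scale and the 95% pass threshold are arbitrary choices for demonstration.

```python
# Minimal sketch: a reliability check measuring how often predictions stay
# stable under small input perturbations. The noise scale and the 95% KPI
# are illustrative choices, not standards.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def stability_rate(model, X, noise_scale=0.01, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(trials):
        noisy = X + rng.normal(0.0, noise_scale, X.shape)
        # A reliable model should not flip its answer for negligible noise.
        stable &= model.predict(noisy) == base
    return stable.mean()

rate = stability_rate(model, X)
print(f"Predictions stable under perturbation: {rate:.1%}")
print("PASS" if rate >= 0.95 else "FAIL", "against a 95% consistency KPI")
```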

In conclusion

To improve trust in their AI systems, it is imperative that businesses apply frameworks such as STAR to assess and rectify shortcomings before putting these systems to greater use. AI systems should perhaps carry a 'trust mark' to indicate the level of trust established using a given trust framework. It is also important to support research and the dissemination of such information to help improve the overall trustworthiness of future AI systems.
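As a thought experiment, such a 'trust mark' could be as simple as publishing a weighted aggregate of the four STAR scores from an assessment. The weights and rating bands in the sketch below are entirely hypothetical.

```python
# Minimal sketch of the 'trust mark' idea: aggregate assessed STAR scores
# (each 0-1) into a published rating. Weights and bands are hypothetical.
def trust_mark(safety: float, transparency: float, accountability: float,
               reliability: float, weights=(0.3, 0.25, 0.2, 0.25)) -> tuple:
    scores = (safety, transparency, accountability, reliability)
    overall = sum(w * s for w, s in zip(weights, scores))
    band = "High" if overall >= 0.8 else "Medium" if overall >= 0.5 else "Low"
    return overall, band

score, band = trust_mark(safety=0.9, transparency=0.7,
                         accountability=0.8, reliability=0.85)
print(f"STAR trust mark: {score:.2f} ({band})")
```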

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill the form here.


Suresh Chintada

Suresh is a senior executive with wide-ranging experience as a director of engineering, head of software operations, head of applied research projects, program and project manager, systems engineer, and software product architect, in a career spanning over 25 years in the high-tech software, wireless, mobile, telecom, cable, and networking industries.