The Scope & Future of Responsible AI


“With great power comes great responsibility,” a quote popularised by the Spider-Man films, has long been applied to people holding positions of power and authority. It seems the time has come to expand the scope of this quote. Given how artificial intelligence systems influence our daily lives and ultimately shape entire societies, it is imperative to ensure accountability and responsibility from such systems.

Widely referred to as Responsible AI, the concept has been around for a long time but has only recently become a mainstream conversation point. With companies and governments globally taking cognizance of the situation, it is important for all to understand its nitty-gritty before going all in. In this regard, Tredence conducted a Fireside Chat session on “Responsible AI: Decode, Contextualise and Operationalise”. The session hosted Professor Balaraman Ravindran, Head of the Robert Bosch Centre for Data Science and AI and Professor at IIT Madras; Soumendra Mohanty, Chief Strategy Officer & Chief Innovation Officer at Tredence Inc.; and Dr Aravind Chandramouli, Head of the AI CoE at Tredence Inc.

Responsible AI and its Scope

A challenge becomes much easier to tackle when it can be defined clearly, and the same holds true for Responsible AI. So, early in the session, Prof. Ravindran, Mohanty, and Dr Chandramouli discussed what constitutes the scope of Responsible AI.

According to Dr Chandramouli, Responsible AI has a set of defining characteristics, the five most important being explainability, fairness (freedom from bias), reproducibility, justifiability, and monitorability. To this, Prof. Ravindran added privacy and accountability as further significant factors.
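To make one of these characteristics concrete: the fairness criterion is often checked with simple group-level metrics. A minimal, hypothetical sketch (the metric choice, function name, and toy data below are illustrative, not from the session) is the demographic-parity gap, i.e. the difference in positive-prediction rates between two groups:

```python
# Illustrative sketch: quantify the "unbiased" criterion as a
# demographic-parity gap -- the difference in positive-prediction
# rates between two groups. All names and data here are made up.

def demographic_parity_gap(predictions, groups):
    """Return |P(pred=1 | group A) - P(pred=1 | group B)| for two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy example: group "x" gets a positive outcome 75% of the time, "y" only 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap near zero suggests parity on this one metric; in practice, several such metrics would be monitored together, which is where the "monitorable" characteristic comes in.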

A Gartner report corroborates the definition put across by the dignitaries. As per the report, Responsible AI encompasses several aspects of making the right decisions when adopting AI. These aspects are often addressed independently by organisations.

Like most innovations, Responsible AI finds roots in academia. Elaborating more on this, Prof. Ravindran said, “Surprisingly, we don’t have tech pieces to roll out Responsible AI. Do we have robust explainability? Yes, we have, but only for simple linear models. We don’t have anything for complex models. The industry is making progress, but the system still cannot find the source of bias. The use of facial recognition tech is being deprecated in the west. In the academic domain, people have started methods to build systems. Even when you remove sensitive attributes, how can you ensure that the system is not latching on to these sensitive attributes? Theoretically, a few solutions have been devised. But there is a gap in practical application.”

“Responsible AI is not just technocrats; you need social scientists, ethnographers, people with legal expertise to come up and talk about it,” he further added. 
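Prof. Ravindran's point that explainability is available "only for simple linear models" can be illustrated with a small sketch. For a linear model, every prediction decomposes exactly into per-feature contributions (weight times feature value), which is why such models are considered inherently explainable; no comparable exact decomposition exists for complex models. The feature names and weights below are hypothetical:

```python
# Illustrative sketch (not from the talk): a linear model's score
# decomposes exactly into weight * value per feature, so each
# prediction can be explained term by term. Weights are made up.

weights = {"income": 0.6, "age": 0.1, "debt": -0.8}

def explain(features):
    """Break a linear score into per-feature contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

score, parts = explain({"income": 1.0, "age": 0.5, "debt": 0.25})
print(round(score, 2))   # 0.45
print(parts["debt"])     # -0.2
```

Here the model's output is fully accounted for by the three contributions; deep networks offer no such exact, human-readable breakdown, which is the gap the speakers describe.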

Responsible AI Framework

With the growing concern and awareness about Responsible AI, the need for standardisation has also grown manifold. According to Dr Chandramouli, the time is ripe for introducing such standardisation while Responsible AI is still at a nascent stage, and such guidelines also provide a reference framework for the future. Different organisations are currently developing their own guiding principles; the good news is that these guidelines largely overlap.

As per Mohanty, the role of Responsible AI will become even more significant as we move to complex and mission-critical scenarios. In such cases, an ideal Responsible AI framework is one grounded firmly in practical use cases.

On the surface, the current state of Responsible AI in the country seems negligible. However, Dr Chandramouli contended that, on the contrary, companies are slowly but steadily working to improve it. “If we break down the figure, we find that metrics like explainability and monitoring are far ahead of other metrics. A McKinsey report showed that many companies are now giving preference to Responsible AI. The fact that we are having such a discussion shows that the industry is open to Responsible AI,” he added.

What Lies Ahead

All three dignitaries unanimously agreed that there is a long way to go before Responsible AI is completely and effectively adopted. As Mohanty mentioned, AI can no longer be a black box; there is a dire need to make it explainable, interactive, inclusive and trustworthy, and to humanise it.

Dr Chandramouli concluded by predicting that, in the next decade, AI products and systems will be given a “responsibility score” indicating how well they align with Responsible AI principles. The future surely sounds promising.

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
