AIM Research Releases Report On Responsible AI Adoption In Indian Enterprises

A new report released by AIM Research analyses the current state of Responsible AI among Indian enterprises. The report, titled “State of Responsible AI in India”, highlights the efforts organisations in India are making to ensure AI’s safe and responsible development and draws attention to areas that need improvement.

The report is based on a survey of Indian AI firms conducted by AIM Research in May 2021. The study provides a complete overview of where Indian enterprises stand in adhering to AI principles such as fairness, transparency, accountability, explainability, and human control.

This report will help enterprises and the broader Indian AI industry improve their policymaking to ensure the safe and responsible development of AI.

Access the complete report here.

The rapid advances in AI, and its potential to make decisions and automate tasks, significantly impact individual autonomy and change how societies function. It has therefore become essential to assess how AI is built and to ensure that it is deployed in a manner that does not negatively impact individuals or societies.

According to the report, Indian enterprises developing AI are making considerable efforts to adopt guidelines, frameworks, and other best practices, but lag behind in conducting third-party audits or impact assessments of their AI systems.

Two-thirds (66.7%) of Indian AI firms have adopted a formal risk evaluation or auditing framework, but only 6.9% have hired external auditors.

The report provides a thorough analysis of these AI enterprises across parameters such as the size of the data science unit, the type of firm, and the headquarters location.

The report’s findings show that larger firms perform comparatively better at documenting risk evaluation guidelines, adopting bias detection frameworks, ensuring safety, and hiring third-party auditors. However, they fall behind in conducting periodic human-rights impact assessments of their AI systems.

Around seven in eight (87.5%) of the firms with large data science units have documented safety standards checklists for their AI systems, compared to 75.0% of firms with medium-sized units and 60.0% of those with small units. At the same time, firms with large data science units (37.5%) are roughly twice as likely not to perform long-term impact assessments on any of their AI systems as those with medium-sized (16.7%) or small (20.0%) units.

Boutique AI firms that provide niche AI products or services fare better on most principles of Responsible AI than big IT firms that provide AI-as-a-service. They are more likely to have audit guidelines, standards checklists for algorithmic fairness, and bias detection frameworks. They are also more likely to have transparent AI systems, perform periodic human-rights impact assessments, and allow more human control.

More than nine in ten (92.3%) of the boutique AI firms use an explainability framework to better understand their AI systems, compared with 75.0% of the IT firms providing AI-as-a-service. Similarly, around 15.4% of boutique AI firms conduct a human-rights impact assessment on every AI system, while only 6.7% of the firms providing AI-as-a-service do so.

Lastly, firms headquartered outside India are more likely to adopt bias detection frameworks and maintain compliance standards. They are also more likely to consult an AI Ethicist and develop their systems through multi-stakeholder collaborations.

AI firms headquartered outside India (35.7%) are more likely to adopt risk evaluation or bias detection frameworks than firms headquartered in India (20.0%). Likewise, around 86.7% of the firms headquartered outside India consult with stakeholders for every AI system they develop, compared to 71.4% of those headquartered in India.

The report offers such detailed insights through a comprehensive analysis of the survey, along with recommendations that could improve the state of Responsible AI among Indian enterprises.

Access the complete 50-page report here.

Kashyap Raibagi
Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM).
