A new report released by AIM Research analyses the current state of Responsible AI among Indian enterprises. The report, titled “State of Responsible AI in India”, highlights the efforts made by organisations in India to ensure AI’s safe and responsible development and draws attention to areas that need improvement.
The report is based on a survey of Indian AI firms conducted by AIM Research in May 2021. The study provides a complete overview of where Indian enterprises stand in adhering to AI principles such as fairness, transparency, accountability, explainability, and human control.
This report will help enterprises and the overall Indian AI industry improve their policymaking to ensure the safe and responsible development of AI.
Access the complete report here.
The rapid advances in AI and its potential to make decisions and automate tasks significantly impact individual autonomy and change how societies function. Thus, it has become essential that we assess how AI is built and ensure that it is deployed in a manner that does not negatively impact individuals or societies.
According to the report, Indian enterprises developing AI are making a considerable effort to adopt guidelines, frameworks, and other best practices, but lag behind in conducting third-party audits or impact assessments of their AI systems.
Two-thirds (66.7%) of Indian AI firms have adopted a formal risk evaluation or auditing framework, but only 6.9% have hired external auditors.
The report provides a thorough analysis of these AI enterprises across parameters such as the size of the data science unit, the type of firm, and the headquarters location.
The report’s findings show that larger firms perform comparatively better in documenting risk evaluation guidelines, adopting bias detection frameworks, ensuring safety, and hiring third-party auditors. However, they fall behind in conducting periodic human-rights impact assessments of their AI systems.
Around seven in eight (87.5%) of the firms with large data science units have documented safety-standards checklists for their AI systems, compared to 75.0% of medium-sized and 60.0% of small data science units. At the same time, large data science units (37.5%) are around twice as likely not to perform long-term impact assessments on any of their AI systems as medium-sized (16.7%) and small (20.0%) data science units.
Boutique AI firms that provide niche AI products or services perform better on most principles of Responsible AI than big IT firms that provide AI-as-a-service. They are more likely to have audit guidelines, standards checklists for algorithmic fairness, and bias detection frameworks. They are also more likely to have transparent AI systems, perform periodic human rights impact assessments, and allow more human control.
More than nine in ten (92.3%) of the boutique AI firms use an explainability framework to better understand their AI systems, compared to 75.0% of the IT firms providing AI-as-a-service. Also, around 15.4% of the boutique AI firms conduct a human rights impact assessment on every AI system, while only 6.7% of the firms providing AI-as-a-service do so.
Lastly, firms with headquarters outside India are more likely to adopt bias detection frameworks and maintain compliance standards. They are also more likely to consult with an AI Ethicist and develop their systems through multi-stakeholder collaborations.
AI firms headquartered outside India (35.7%) are more likely to adopt risk evaluation or bias detection frameworks than firms headquartered in India (20.0%). Also, around 86.7% of the firms with headquarters outside India consult with stakeholders for every AI system they develop, compared to 71.4% of the AI firms with headquarters in India.
The report offers such detailed insights through a comprehensive analysis of the survey, along with recommendations that could improve the state of Responsible AI among Indian enterprises.
Access the complete 50-page report here.