According to the Global AI Adoption Index 2021, 91 percent of companies using AI think it’s important to understand how the models arrived at a decision. Additionally, more than half of the businesses pointed out stumbling blocks that get in the way of embedding ethical AI in their processes, including lack of skills, inflexible governance tools and biased data.
“Trust in technology does not often emerge on its own; it must be cultivated. We are in a rare moment in technology innovation where we can think critically about ethics before issues emerge,” said Beena Ammanath, Global Head of Deloitte AI Institute, Tech & AI Ethics Lead, Deloitte.
We sounded out industry leaders to understand the critical importance of ethical AI in modern businesses.
Abhijit Shanbhag, president and CEO, Graymatics
While there is a lot of buzz around technological advancement, we see reluctance in the adoption of AI, primarily due to common misconceptions around harnessing the full potential AI offers. For instance, there are concerns about breaches of privacy and cyber threats. Further, there are challenges related to the transparency and accountability of the AI algorithms deployed. Factors such as data bias, data privacy, safety and security around AI implementations have also caused a lot of concern.
Human intervention in AI governance must be supported by policies, self-adhering guidelines, certifications, and rating mechanisms to manage sensitivity issues.
1. Principle of safety and reliability: AI-enabled systems should operate as intended, and no individuals, groups or communities should be harmed as a result of decisions the system makes, either directly or indirectly.
2. Principle of equality: All stakeholders should be treated equally, and the benefits of AI-enabled systems should be made available equally to all, unless there is a reasonable basis for differential treatment.
3. Principle of inclusivity and non-discrimination: Benefits of AI systems should be made available to all, and no segment of individuals and communities should be denied benefits or overlooked due to any design constraints created in the system.
4. Principle of privacy and security: Adequate safeguards must exist to maintain privacy and security of the data used and stored by an AI system.
5. Principle of transparency: The process and the output of an AI system should be transparent and explainable.
6. Principle of accountability: Mechanisms should be developed to impute liability to different participants, and a grievance redressal framework should be accessible to users.
7. Principle of protection and reinforcement of positive human values: AI systems should promote positive human values and should not disrupt the social fabric of the community.
Vicky Jain, founder, uKnowva
One of the biggest drawbacks of AI algorithms is that they are like a black box: you never know the exact logic behind the output, and if your input dataset is biased, the algorithm will be biased too. A good AI can do wonders, but a biased one spells disaster. I believe trust can be achieved by establishing an AI law, similar to today’s privacy laws, to ensure people have the confidence that technology is making ethically correct decisions.
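The “biased data in, biased model out” point can be demonstrated in a few lines. The sketch below uses a synthetic, hypothetical dataset (the group, skill and approval variables are assumptions for illustration): historical approvals applied a higher bar to one group, and a model trained on those decisions reproduces the disparity even though the underlying skill distribution is identical.

```python
# Sketch: a model trained on biased historical decisions reproduces the bias.
# All data and column meanings here are synthetic assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)            # true ability, identically distributed

# Biased history: group B needed a higher skill threshold to be approved.
approved = (skill > np.where(group == 0, 0.0, 0.8)).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

rate_a = pred[group == 0].mean()       # approval rate learned for group A
rate_b = pred[group == 1].mean()       # approval rate learned for group B
# The model approves group A far more often, mirroring the biased history.
```

Nothing in the training code is “wrong”; the model faithfully learns the bias baked into its labels, which is exactly why auditing training data matters.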
We also bring different ideas and experiences together to interact with the model in various ways. Other than that, we try to anticipate how people unlike us will interact with the technology and what issues might arise in their doing so. We systematically feed ethical principles into our platform through periodic code and data reviews.
Ajay Agrawal, senior VP & head of CoE – AI/Analytics, Happiest Minds Technologies
Enabling trust in AI is the need of the hour. With a focus on privacy and compliance, protecting privacy-related and personally identifiable information (PII) is critical. AI engines need to ensure that features like race, gender, ethnicity and religion are not used for decision-making. To ensure AI at scale, we must focus on people, processes, tools and data. As awareness and compliance needs grow, complying with regulations like the European Union’s AI regulation and GDPR standards can help in setting a roadmap for AI governance.
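The simplest mechanical step behind “race, gender, ethnicity and religion are not used for decision-making” is to drop those columns before training. A minimal sketch, with hypothetical column names (note that this alone does not remove proxy bias, since other features can correlate with protected attributes):

```python
# Sketch: strip protected attributes from a feature table before training.
# Column names are hypothetical assumptions for illustration.
import pandas as pd

PROTECTED = {"race", "gender", "ethnicity", "religion"}

def strip_protected(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the feature table without protected columns."""
    return df.drop(columns=[c for c in df.columns if c.lower() in PROTECTED])

applicants = pd.DataFrame({
    "income": [52000, 31000],
    "gender": ["F", "M"],
    "credit_score": [710, 640],
})
features = strip_protected(applicants)
# features now contains only income and credit_score
```

In practice this filter would sit at the start of the feature pipeline, paired with checks for proxy variables such as postcode.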
We have mandated best practices and policies with regard to AI governance. The team has been trained in AI governance practices that align with most compliance needs and best practices, such as understanding the European Union’s AI regulations. Processes and frameworks are in place to monitor the ML development cycle at every stage.
Mayank Singh, co-founder & CEO, Campus 365
While several companies are now using AI on a trial basis to test out how their services are compatible with the wide scope of functions, Campus 365 is using it for simpler tasks and simpler mechanisms.
Though our mood scale and progress tracking system are geared towards children, results will only be accessed by people who are already 18. When we were building the AI, we were concerned about the biases that might slip into the code. Biases are common when building platforms and systems, and while we cannot eliminate them entirely, we can make sure the platform is as free of outside influence as possible. LMS and eLearning platforms work hand in hand to deploy effective learning strategies, while the teams and administration work to reduce the probability of bias in whatever is presented to our learners.
Arvind Nahata, co-founder, Decimal Technologies
A wide range of AI technologies is being used to digitise the entire loan journey. Image processing and deep learning algorithms like Fast R-CNN, along with OCR, are used to extract and parse relevant information from the financial documents submitted by borrowers.
Using NLP techniques like named entity recognition (NER) and various other text mining techniques, fields like name, PAN number and Aadhaar number are automatically identified, extracted and filled in wherever required in the loan application journey.
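For structured identifiers like PAN and Aadhaar numbers, pattern matching is a common complement to NER models. A minimal sketch, assuming the published formats (PAN: five letters, four digits, one letter; Aadhaar: twelve digits, often grouped 4-4-4) — the function name and pipeline placement are assumptions, not Decimal’s actual implementation:

```python
# Sketch: pattern-based extraction of PAN and Aadhaar numbers from OCR text.
# The regexes reflect the published formats; everything else is illustrative.
import re

PAN_RE = re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b")       # e.g. ABCDE1234F
AADHAAR_RE = re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b")   # 12 digits, often 4-4-4

def extract_ids(text: str) -> dict:
    """Pull candidate PAN and Aadhaar numbers out of free text."""
    return {
        "pan": PAN_RE.findall(text),
        "aadhaar": [a.replace(" ", "") for a in AADHAAR_RE.findall(text)],
    }

doc = "PAN: ABCDE1234F, Aadhaar: 1234 5678 9012"
ids = extract_ids(doc)
```

Matched candidates would then be validated (e.g. Aadhaar checksum) before being auto-filled into the application form.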
Credit risk assessment and recommendations are driven by borrowers’ financial data. Features are engineered using domain-specific knowledge and are then fed into Saarathi’s AI engine, which is based on a combination of algorithms like random forests, logistic regression and semi-supervised learning methods like the label propagation algorithm.
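Label propagation is useful here because only a fraction of borrowers have expert-assigned risk labels; the algorithm spreads those labels to similar unlabelled borrowers. A minimal sketch with scikit-learn and a synthetic one-dimensional risk feature (the data is an assumption, not Saarathi’s):

```python
# Sketch: semi-supervised risk labelling with scikit-learn's LabelPropagation.
# Features and labels are synthetic; in practice they would be engineered
# from borrowers' financial data.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[0.10], [0.20], [0.15], [0.90], [0.80], [0.85],
              [0.12], [0.88]])          # a single normalised risk feature
# -1 marks unlabelled borrowers; only four have expert labels.
y = np.array([0, 0, -1, 1, 1, -1, -1, -1])

model = LabelPropagation(kernel="rbf", gamma=20).fit(X, y)
predicted = model.transduction_        # labels inferred for every borrower
```

Borrowers near the labelled low-risk cluster inherit label 0, those near the high-risk cluster inherit label 1, stretching a small expert-labelled set across the whole portfolio.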
During the underwriting process, human biases sometimes led to creditworthy borrowers not being able to get loans easily. With the introduction of AI in digital lending, this bias is expected to decrease. However, bias can also be coded into AI algorithms if the training data emerges from existing biased datasets and processes, making human bias a possibility even in a completely digitised loan application process. In such a scenario, it is important to recognise the bias beforehand and ensure fairness is implemented into the digital AI-led processes. Technological solutions that help identify and eliminate bias should be embraced by financial institutions at scale for equitable credit disbursal and to close the credit gap.
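“Recognise the bias beforehand” typically starts with simple audit metrics. One of the most common is demographic parity: comparing approval rates across groups. A minimal sketch with illustrative, made-up decisions and group labels:

```python
# Sketch: a minimal demographic-parity check on loan decisions.
# Decisions and group assignments below are illustrative assumptions.
def selection_rate(decisions, groups, group):
    """Fraction of applicants in `group` that were approved."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = approved, 0 = rejected
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = selection_rate(decisions, groups, "a") - selection_rate(decisions, groups, "b")
# A large gap flags potential bias worth investigating before deployment.
```

A single metric is not proof of unfairness on its own, but tracking such gaps over time is a practical first step towards the at-scale bias detection the paragraph calls for.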
All the sensitive borrower information is masked on our platform to prevent possible misuse. We ensure our AI-based methods mimic the intelligence of the domain experts in the lending ecosystem. We have taken the help of experts for data labelling and tagging for credit risk and recommendation engine.
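Masking sensitive identifiers before display or logging is straightforward to sketch. The rule below (replace all but the last four characters) is an assumption for illustration, not a description of Decimal’s actual masking policy:

```python
# Sketch: masking sensitive borrower identifiers before display or logging.
# The keep-last-four rule is a hypothetical masking policy.
def mask_id(value: str, visible: int = 4) -> str:
    """Replace all but the last `visible` characters with 'X'."""
    if len(value) <= visible:
        return value
    return "X" * (len(value) - visible) + value[-visible:]

masked_pan = mask_id("ABCDE1234F")          # 'XXXXXX234F'
masked_aadhaar = mask_id("123456789012")    # 'XXXXXXXX9012'
```

In a real system masking would be applied at the storage or serialisation boundary, so unmasked values never reach logs or front-end views.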
Amitt Sharma, founder and CEO, VDO.AI
Though AI’s applications outnumber its obstacles, one of the most significant roadblocks it currently confronts is algorithmic bias. Because algorithms are developed by people, they are susceptible to basic human assumptions. As a result, the industry’s current problem is to clear its AI systems of biases.
Algorithmic trust and digital ethics should form the fundamental components of any AI effort. VDO.AI understands AI ethics and leverages powerful, predictive ethical AI technology to move beyond the CPM and deliver actual outcomes to our clients’ ad campaigns.
To generate meaningful business outcomes across the funnel, our platform blends high-quality inventory with smart data utilisation, action-driven creatives, and powerful AI.
Human intervention assures clients of manual fact-checking to eliminate algorithmic biases. Furthermore, we maximise ROI for our publishers, instead of letting a single platform take the lion’s share from both publishers and advertisers.