
How Businesses Can Adopt Responsible AI Amid The Crisis


The COVID-19 pandemic has opened up enormous potential for deploying artificial intelligence across society. Businesses in every sector, along with governments of different countries, are relying on the technology to battle the pandemic. The opportunities are significant, with companies deploying AI to enhance their customer service and sales operations during the crisis. According to a survey, 85% of business leaders believe that artificial intelligence will significantly change the way businesses operate in the post-pandemic world.

In recent news, Indian tech giants such as TCS, Wipro, and HCL have announced that they are using artificial intelligence in clinical development and drug discovery. Several state governments, including Kerala, Karnataka, Uttar Pradesh, and Tamil Nadu, are collaborating with startups and investing in AI-based tools to fight the pandemic. Many businesses are also deploying the technology to keep operations running, using chatbots, HR tech, and other AI techniques to enhance customer engagement.

For instance, Rahul VP, co-founder of Articbot, a conversational AI startup offering chatbots to its customers, told the media, “During this outbreak, enterprises are concentrating more on customer retention than sales. We have seen more than a 100% increase in sales of our chatbot over the last two months.”

However, the vast potential of artificial intelligence also brings considerable risks around privacy, security, and bias, making responsible AI an obligation for every organisation. Although studies suggest that 80% of companies that fail to deploy artificial intelligence will go out of business by 2025, it is equally urgent for leaders to understand the technology's implications, its explainability, and the responsibility AI has towards society.

In fact, to promote the ethical use of the technology, Vatican officials signed a pledge with tech giants Microsoft Corp and IBM earlier this year. The initiative was meant to ensure that deployments of artificial intelligence respect privacy and work without bias. With this deal, the companies also committed to creating awareness among organisations and encouraging them to set strict guidelines for using AI.

In addition, at its recent Build developers conference, Microsoft strongly emphasised creating new tools to build more responsible and fairer AI systems. The company had previously shared its AI principles with its customers: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Such initiatives by the IT giants have prompted many companies to recognise the importance of responsible AI. According to a PwC report, 61% of organisations across the globe are already working on transparent and explainable AI models, while 55% of respondents are creating AI systems that are both ethical and understandable.

Ways Businesses Can Create A Framework For Responsible AI Amid Crisis

Although businesses are embracing artificial intelligence to maintain business continuity, the implementations are usually complex and can bring unexpected risks. While deploying the technology, companies also have to be accountable for the implications their AI models have on society, including privacy, security, bias, and credibility. That is what obliges businesses to develop a clear and strict framework governing the use of artificial intelligence.

In fact, Pegasystems, a software development company, recently announced an Ethical Bias Check, which will help companies identify and eliminate the hidden biases in their AI models. Rob Walker, the VP of decisioning & analytics at Pegasystems, told the media, “As AI is being embedded in almost every aspect of customer engagement, certain high-profile incidents have made businesses increasingly aware of the risk of unintentional bias and its painful effect on customers.” Companies should therefore put effort into creating ethical AI, which can add value to customer interactions. In this article, we share a few ways companies can create guidelines for responsible AI.

Establishing Transparency Of Artificial Intelligence

To have responsible AI in their organisations, business leaders need to create strong governance along with a clear set of guidelines covering the transparency of the technology's inner workings and the explainability of how models arrive at particular decisions. The framework should be aligned with the values and mission of the business and should also respect regulatory constraints. A majority of companies work with black-box AI, where the algorithms and the model's operation are not visible or explainable to customers and stakeholders, which in turn creates distrust.

A case in point is Apple's credit card issue, where the model was scrutinised for allegedly sexist loan assessments. Business leaders should therefore be proactive in creating transparent algorithms, policies on bias, and explainable decision making. Apart from transparency, a responsible AI model should also be accountable for the quality and accuracy of the insights it provides. With better accountability, companies can leverage the potential of AI to build strong relationships with their customers.
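
As a concrete illustration of the explainability side, the short sketch below uses scikit-learn's permutation importance to surface which features a trained model's decisions depend on most. The synthetic dataset, the gradient-boosting model, and the feature names are assumptions for the example, not a reference to any particular production system.

```python
# A minimal sketch of opening up an otherwise "black box" model by ranking
# features with permutation importance. Dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much the score drops;
# large drops indicate features the model leans on most when deciding.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```

A ranked list like this is only a starting point, but it gives stakeholders something concrete to question when a model's decision is challenged.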

Create An Ethical Design Of AI

Businesses should ensure that the solutions they implement for customers are designed with responsible AI guidelines in mind so that they achieve the desired business outcomes. A human-centric AI design can analyse objectively and help humans identify the biases attached. In most cases, artificial intelligence helps business leaders make data-driven decisions; however, in some cases it mimics human behaviour and produces biased decisions, which can adversely affect the business.

Apart from providing excellent services, companies also need to empathise with customer requirements and build fair AI. For this, businesses need to handle variables like gender, age, and race carefully when training their models, and should also evaluate the implications of their business strategies for their customers. With employees integrating machine learning into their workflows, leaders must ensure that no unintentional biases creep into their models.
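
One simple way to start is to compare a model's outcomes across a sensitive attribute on held-out data. The sketch below assumes a small validation frame with a gender column, true labels, and the model's predictions; the 0.1 gap threshold is an illustrative choice, not a standard.

```python
# A minimal sketch of checking a trained model's behaviour across a sensitive
# attribute such as gender. Column names and threshold are assumptions.
import pandas as pd

# Assume a validation frame with the model's predictions already attached.
results = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "y_true": [1, 0, 1, 1, 1, 0, 0, 1],
    "y_pred": [1, 0, 1, 1, 0, 0, 0, 1],
})

per_group = results.groupby("gender")[["y_true", "y_pred"]].apply(
    lambda g: pd.Series({
        "selection_rate": g["y_pred"].mean(),             # share of positive decisions
        "accuracy": (g["y_pred"] == g["y_true"]).mean(),  # correctness within the group
    })
)
print(per_group)

# Flag a large gap in positive-decision rates between groups for human review.
gap = per_group["selection_rate"].max() - per_group["selection_rate"].min()
if gap > 0.1:
    print(f"Selection-rate gap of {gap:.2f} exceeds threshold; review for bias.")
```

Per-group tables like this do not prove or disprove fairness on their own, but they make disparities visible early enough to investigate.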

This also demands diversity in the workforce, which helps not only in removing biases but also in building responsible AI tools. Rumman Chowdhury, the lead for Responsible AI at Accenture, once told the media, “Successful governance of AI systems need to allow ‘constructive dissent’ — that is, a culture where individuals, from the bottom up, are empowered to speak and protected if they do so. It is self-defeating to create rules of ethical use without the institutional incentives and protections for workers engaged in these projects to speak up.”

Create A Regular Check Of Your AI Model Performance

Once businesses have created a robust framework by eliminating biases and building in transparency and explainability, it is essential to check the AI model regularly against the guidelines that have been created. Even when businesses use accurate data and train their models carefully against biases, different modelling techniques and approaches will often produce different results on the same data. It is therefore crucial to monitor model performance continuously, and the fairness of the model should also be reviewed regularly to ensure that no new biases have emerged with the arrival of new data.

Monitoring for biases increases not only the integrity of the model, making it more ethical, but also its accuracy. Businesses rely on several algorithms to police their AI, and these models should be continuously monitored so that businesses have real-time visibility into the black box of AI and can understand how their models behave on their problems. In fact, Google researchers recently released a machine learning fairness gym for developing and comparing reinforcement learning algorithms; the company released the toolkit to help track the societal impacts of this new technology.
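
As an illustrative sketch of such a recurring check, the snippet below computes a demographic-parity gap on each new batch of predictions and flags batches where the gap exceeds a tolerance. The metric choice, the sensitive attribute, and the 0.1 tolerance are assumptions for the example; teams would substitute the fairness criteria from their own guidelines.

```python
# A minimal sketch of a recurring fairness check that could run after each
# retraining or scoring batch. Metric, attribute, and tolerance are illustrative.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

def check_batch(y_pred: np.ndarray, sensitive: np.ndarray, tolerance: float = 0.1) -> bool:
    """Return True if the batch passes; print an alert otherwise."""
    gap = demographic_parity_gap(y_pred, sensitive)
    if gap > tolerance:
        print(f"ALERT: parity gap {gap:.2f} exceeds tolerance {tolerance}")
        return False
    return True

# Example: scoring a fresh batch of decisions as new data flows in.
rng = np.random.default_rng(0)
sensitive = rng.choice(["A", "B"], size=500)
y_pred = (rng.random(500) < np.where(sensitive == "A", 0.6, 0.4)).astype(int)
check_batch(y_pred, sensitive)
```

Wiring a check like this into the scoring pipeline turns fairness from a one-off audit into an ongoing signal alongside accuracy metrics.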

Create Awareness Among Employees And Re-Skill Them

Last but not least, business leaders need to make sure that both the users and the makers of their AI models, that is, the employees, are aware of the responsible usage of AI. One report states that 74% of employees believe it will be vital for them to develop new skills to work with artificial intelligence. It has therefore become imperative for companies to help re-skill their employees for this new way of working with artificial intelligence.

For this, organisations need to create training programmes, workshops, and seminars, and provide toolkits so that employees can better understand how these AI models operate and help in making business decisions. AI conversations should not be limited to data scientists; business leaders should also understand the importance of democratising artificial intelligence across the organisation, which will in turn help employees build the right mindset and uphold the organisation's AI commitments. AI ethics should not only sit among the company's core values but also be promoted regularly among employees.

Sejuti Das

Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com