How COVID Pandemic Highlighted The Limitations Of AI


During the unprecedented time of COVID-19, artificial intelligence has proved to be a technology that touches almost every aspect of human life, whether in healthcare, banking, shopping, or running businesses amid the crisis. The impact of the pandemic has underscored the significance of the technology, and companies and government entities are accelerating their adoption of AI-powered solutions. In fact, a PwC report notes that AI could be a game-changer for this era and contribute up to $15.7 trillion to the global economy by 2030.

A lot of this can be attributed to the popular belief that artificial intelligence has the potential to solve some of the most complex real-world business problems. However, despite the incredible outcomes that artificial intelligence has delivered for businesses, the pandemic has highlighted some existing problems with the technology, including explainability, accuracy, and privacy-related risks.

For instance, Joe Redmon, creator of the popular YOLO computer vision algorithm, recently tweeted that he is leaving computer vision research over concerns about the unethical use of AI. In other news, Walmart employees have raised concerns about a flawed artificial intelligence system that fails to flag wrongdoing by customers.


As a matter of fact, highlighting these concerns, the Indian government recently announced its participation in the Global Partnership on AI, aka GPAI or Gee-Pay, along with other nations including the US, UK, EU, Australia, Canada, New Zealand, and the Republic of Korea. This first-of-its-kind partnership is focused on creating guidelines for the responsible use of AI, covering diversity, innovation and economic growth. “By joining GPAI as a founding member, India will actively participate in the global development of artificial intelligence, leveraging its experience around the use of digital technologies for inclusive growth,” noted the statement from the Indian government.

Despite these initiatives, there are still no strict measures to ensure the accuracy of information, the security of data, and effectively trained AI models that can support businesses with their problems. Ineffective usage of AI can not only hamper businesses but also put thousands of lives at risk. It is therefore critical to understand the key limitations of artificial intelligence amid this crisis, so that governments and businesses can create a robust framework and essential guidelines for the responsible usage of this technology.

Also Read: Artificial Intelligence Essential For Businesses Leaders

Highlighting the constraints of AI that can raise red flags for adopters

The COVID-19 pandemic has created a massive push for businesses, healthcare providers, and government entities to rely on artificial intelligence to enhance their operations. However, one key concern it brings is the pre-existing prejudice that surfaces when AI models make decisions about a specific section of people. Much of this can be attributed to the humans involved in curating and labelling the training data, whose biases cloud the AI's judgement.

Such problems arise when an AI system used for recognising people is trained on a data set that is not diverse across races, castes, genders and communities, and that lack of diversity gets reflected in the decisions made by the facial recognition system.
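To make this concrete, here is a minimal sketch, in Python, of how a team might audit group representation in a training set before using it to train such a model. The file name, the `group` column and the threshold are purely illustrative assumptions, not a reference to any specific system mentioned in this article.

```python
import pandas as pd

# Hypothetical training metadata: one row per training sample,
# with a demographic attribute recorded for auditing purposes.
train_df = pd.read_csv("train_metadata.csv")  # assumed to contain a "group" column

# Share of each group in the training data
representation = train_df["group"].value_counts(normalize=True)
print(representation)

# Flag groups that fall below a chosen representation threshold
MIN_SHARE = 0.10  # illustrative threshold, not an accepted standard
under_represented = representation[representation < MIN_SHARE]
if not under_represented.empty:
    print("Under-represented groups:", list(under_represented.index))
```

A check like this only surfaces imbalance; deciding what counts as adequate representation for a given use case is still a human judgement call.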

In fact, in the wake of George Floyd's death, it has been widely reported that, despite improvements, facial recognition technology can still be biased against people of colour. That is why tech giants like Amazon, Google, Microsoft and IBM have voiced their concerns and made the laudable decision not to provide facial recognition solutions to police authorities. These biases, whether from unrepresentative data or inherent structural biases, can also have a negative impact on other areas such as creating content, making employment decisions and providing healthcare services.

Also Read: Here’s How To Fight Prejudice In Artificial Intelligence

Case in point: last year, search giant Google's hate speech detector came under scrutiny for prejudice against people of colour. Experts attributed this to inefficient training of the models and biased algorithms. As IBM once put it, artificial intelligence is only as useful as the data that is fed into the machine.

In an effort to mitigate this problem, Facebook CTO Mike Schroepfer has also recently spoken about how diverse hiring in an organisation can help avoid biases in training data sets and AI models. “I also think that the real solution to these problems for things like making sure you have a diverse data set is actually the process, understanding of formalising this across the company, so there are statistical methods to determine whether this data set is represented in the ways you care about,” Schroepfer stated to the media.

Further questions arise about how this diversity can be measured and how these models should be trained. To this end, researchers have built bias bounties, and Pegasystems, a software development company, has launched a tool, Ethical Bias Check, which helps businesses and developers identify and remove hidden biases from their AI models. Similarly, researchers from IIT Madras, in collaboration with researchers from Queen's University Belfast in the UK, have built a new algorithm to make AI fairer while processing data.
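As an illustration of the kind of statistical check Schroepfer alludes to, here is a minimal sketch of a demographic parity check on model outputs. The column names, example data and interpretation are hypothetical assumptions for the example; this is not how Ethical Bias Check or the IIT Madras algorithm works.

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str = "group",
                                  pred_col: str = "prediction") -> float:
    """Gap between the highest and lowest positive-prediction rates
    across groups (0 means all groups receive positive decisions
    at the same rate)."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Example usage with made-up predictions (1 = positive decision)
df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "C"],
    "prediction": [1, 0, 1, 1, 1, 0],
})
gap = demographic_parity_difference(df)
print(f"Demographic parity difference: {gap:.2f}")
```

Metrics like this are only one lens on fairness; which metric matters depends on the decision the model is making.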

Although it is critical to eliminate these biases, doing so is much harder than expected. Supporting this, Smitha Ganesh, currently head of data innovation, India, at Ericsson, has spoken at length about how hard it can be to fix AI biases, and how significant the value is when models can make informed decisions. To avoid these biases, developers must think about the intention behind the product and the result it is meant to produce, and then curate relevant, holistic data to train the models.

Also Read: Why Mitigating AI Biases Is The Need Of The Hour

Another significant concern with AI is the lack of reliable data. With data being the key input to any AI model, it is extremely critical to understand the reality of the data and how it will add value to informed decision-making. Businesses therefore need to ensure that the data fed to the model is not only sound but also complete with relevant data points. Although many companies, along with government entities, are working to collect relevant data, there is still a considerable gap between the actual data and the data being curated by professionals.

Besides, 80% of the world's data is unstructured, much of which could be irrelevant for training a model, so it needs to be established which data is meaningful and can be used to analyse patterns. For instance, IBM's AI model Watson ran into this issue when it memorised the entire online Urban Dictionary and then couldn't differentiate between slang and polite words. Such instances show that data needs to be handled with caution, and only useful data should be used to train models for a better outcome. In the current situation, experts have also urged companies to avoid using pre-COVID data for their business designs, as the pandemic can transform many aspects of life and such outdated data could produce ineffective results.
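As a rough illustration of that kind of data hygiene, the sketch below drops incomplete records and filters out pre-COVID rows before a model is trained. The file name, column names and cutoff date are assumptions made purely for the example.

```python
import pandas as pd

REQUIRED_COLUMNS = ["customer_id", "date", "value"]  # illustrative schema
COVID_CUTOFF = pd.Timestamp("2020-03-01")            # illustrative cutoff

df = pd.read_csv("business_data.csv", parse_dates=["date"])

# 1. Basic completeness check: drop rows missing any required field
missing = df[REQUIRED_COLUMNS].isna().any(axis=1)
print(f"Dropping {missing.sum()} incomplete rows")
df = df[~missing]

# 2. Exclude pre-COVID records if the model is meant to reflect
#    pandemic-era behaviour, as the experts quoted above suggest
df = df[df["date"] >= COVID_CUTOFF]
```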

Issues with data also yield accuracy issues in AI, and accuracy is one of the core reasons for deploying artificial intelligence in the first place. It is critical for businesses, especially those solving critical real-world problems. With incorrect data, an AI model will produce inaccurate results, which in turn delivers unjust services to customers. For instance, AI has proved immensely helpful for speeding up the treatment of COVID-19 patients; however, if the models are trained on flawed or biased data, the resulting inequities in the model can hamper the treatment of patients.

In another instance, chatbots have shown a lot of potential amid this crisis, with people relying on them for health information and businesses using them to manage their customers. Several government agencies, as well as businesses, have deployed chatbots on their platforms; however, for these systems to provide accurate information by analysing massive amounts of data, proper measures need to be in place.

In fact, the World Economic Forum has stated in a blog post that conversational AI is proving its worth amid this crisis, but that it is critical to address challenges such as inconsistent responses and inaccuracy. Faults in AI-based chatbot results can lead to the dissemination of wrong information, which can harm the public interest. Similar issues can arise if erroneous data is fed into an AI-based healthcare system used to diagnose x-rays, treat patients, or track the spread of the virus amid this crisis. There is therefore a vital requirement for governments as well as businesses to create a policy framework that can vet an AI model before its use in the real world.
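One common mitigation, sketched below purely as an illustration, is to have a chatbot answer only from a vetted knowledge base and defer to an official source when its intent classifier is not confident. The `classify_intent` function, the threshold and the FAQ entries here are hypothetical, not taken from any deployment mentioned above.

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative value

def answer(user_message: str, classify_intent, knowledge_base: dict) -> str:
    """Return a vetted answer only when the model is confident;
    otherwise defer to an official source instead of guessing."""
    intent, confidence = classify_intent(user_message)  # hypothetical model call
    if confidence < CONFIDENCE_THRESHOLD or intent not in knowledge_base:
        return ("I'm not certain about that. Please check the official "
                "health authority website for verified COVID-19 guidance.")
    return knowledge_base[intent]

# Example usage with a stubbed-out classifier and a small vetted FAQ
faq = {"symptoms": "Common symptoms include fever, cough and fatigue."}
stub_classifier = lambda text: ("symptoms", 0.92)
print(answer("What are the symptoms?", stub_classifier, faq))
```

Deferring on low confidence trades some convenience for a lower risk of spreading wrong information, which matters most in health-related deployments.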

Also Read: How Businesses Can Adopt Responsible AI Amid The Crisis

In an interview, Zoya Brar, Founder and CEO of CORE Diagnostics, stated that it is still debatable how much autonomy can be given to AI. She said, “Reliability and safety are vital issues that AI can pose. In cases where AI is deployed to control equipment, deliver treatment, and if the same goes undetected, it can cause serious implications.”

Along with this, AI is also raising privacy concerns, as the majority of companies use customer data to advance their AI systems. This has caused turmoil among consumers, who are becoming more sensitive about sharing personal data, including location, interests and banking histories, in order to avoid potential cyber-attacks and leaks of their data to third-party companies. Such fear is justifiable, as in recent news Clearview AI, a facial recognition startup, has been accused of using people's data without their consent.

Reflecting this, the Aarogya Setu app, launched by the Indian government to provide citizens with essential COVID-19 information, has revised its privacy policy to ensure that users' data is not shared with any third-party apps. Moreover, following the recent decision by tech giants to halt their facial recognition tools for police authorities, Congressional Democrats have also released a reform proposal that would limit the use of this technology for policing.

Wrapping up

The omission of human intervention, along with immense speed and ease of use, has indeed brought AI into the limelight amid this crisis; however, this should not camouflage the limitations and challenges that AI systems bring along. Issues such as implicit bias, lack of data, inaccurate results, and privacy concerns show that AI systems aren't flawless. Nevertheless, AI has a lot of potential to transform human lives, and therefore its limitations and challenges must be addressed, and the necessary framework created for its ethical usage.

Sejuti Das
Sejuti currently works as Associate Editor at Analytics India Magazine (AIM). Reach out at sejuti.das@analyticsindiamag.com
