
Council Post: Why Responsible AI in Computer Vision is Imperative

Until governmental bodies can effectively regulate these developing technologies, organisations and individuals must take the initiative to use computer vision and facial recognition ethically and responsibly. The fundamental principle is to build responsibly and deploy the technology only for its intended purpose.

In the 1960s, early researchers championed artificial intelligence as a technology that could change the world and were incredibly hopeful about the future of these connected sciences. Decades later, Grand View Research reported that the worldwide market for computer vision was worth $11.32 billion in 2020 and was projected to grow at 7.3% annually from 2021 to 2028. While AI applications have become pervasive across industries, the next decade will see rapid advancement of AI-driven computer vision technology. As its applications and use cases expand, so will the need for governance, responsible use and ethical standards.

Computer vision is a field of artificial intelligence that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs. Until recently, computer vision worked only in a limited capacity, but the field has taken great leaps in recent years, surpassing humans in some tasks related to detecting and labelling objects. With continued advances in deep learning and neural networks, its scope will only grow.

But, as with all technologies, computer vision also comes with its own set of challenges. Let’s delve into some areas where computer vision still needs work.

Addressing bias in computer vision

Computer vision is increasingly being used in facial recognition, object detection and autonomous driving. It is crucial that this technology is fair and unbiased so that it can be used in a way that benefits everyone in society. A major challenge in achieving fairness, however, is that these systems are trained on large datasets, which often reflect the biases and prejudices of the society in which they were created. If a dataset used to train a facial recognition system is composed mostly of images of males of a certain ethnicity, the system may perform poorly when recognising faces of people from other racial or gender groups. For instance, according to a study published in The Lancet Digital Health, AI systems designed to diagnose skin cancer were shown to be less reliable because the datasets used to train the models contained no photographs of people with African, African-Caribbean or South Asian heritage.

To address this challenge, it is important to develop more diverse and inclusive datasets that reflect the full diversity of human experience. Additionally, algorithms should be designed to actively detect and correct biases in the data and in the models themselves.
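
One practical first step is to audit a model's performance separately for each demographic group before deployment. Below is a minimal sketch of such an audit in Python; the `model.predict()` interface, the `test_samples` variable and the per-sample `group` annotation are all assumptions of this sketch, not a standard API:

```python
from collections import defaultdict

def audit_by_group(model, samples):
    """Report accuracy separately for each demographic group.

    `samples` is an iterable of (image, true_label, group) tuples;
    both the model interface and the group labels are assumptions
    of this sketch.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for image, true_label, group in samples:
        total[group] += 1
        if model.predict(image) == true_label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

rates = audit_by_group(model, test_samples)  # hypothetical inputs
gap = max(rates.values()) - min(rates.values())
if gap > 0.05:  # illustrative threshold, not a standard
    print(f"Accuracy gap of {gap:.1%} across groups -- revisit dataset balance")
```

A large gap between the best- and worst-served groups is a signal to rebalance the training data or re-examine the model before it reaches production.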

Privacy risks of AI-powered computer vision

Governments and the public sector are rapidly adopting artificial intelligence and computer vision because of their numerous advantages. To monitor traffic, road conditions and public events, for instance, governments deploy computer vision technologies in building smart cities. Cameras capture real-time visual data, which computer vision systems gather and process: the video feed is first recorded and delivered to an on-site system, edge devices or cloud-based storage for processing and analysis. Computer vision applications then run deep learning models over the raw data to carry out tasks such as human detection, object detection and counting. Additionally, data sent to the cloud is frequently retained there for some time in accordance with data retention policies or legal requirements.
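
As a rough illustration of the detect-and-count stage described above, here is a minimal sketch using OpenCV's classical HOG pedestrian detector as a stand-in for the deep learning models production systems typically use; the video file name is a placeholder:

```python
import cv2

# Classical HOG + linear SVM pedestrian detector bundled with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("street_feed.mp4")  # placeholder for a camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Returns bounding boxes and confidence weights for detected people.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    print(f"People in frame: {len(boxes)}")
cap.release()
```

Every frame such a pipeline processes and retains is a frame of identifiable people, which is exactly where the privacy risk below begins.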

The collection, storage and handling of such volumes of identifiable data presents a huge risk of data breaches or misuse, and therefore of invasion of people's privacy. Most businesses have rules governing who may access the data and what they may do with it, but these rules may still allow third-party cloud service providers or vendors to access, share or sell such data for marketing purposes, further raising the risk of privacy misuse.

Environmental implications of AI-powered computer vision

The processes involved in training and using computer vision systems can have a negative environmental impact. Training a deep learning model consumes a substantial amount of processing power and, in turn, a great deal of energy; the data centres that house these models draw enormous quantities of electricity, often generated from fossil fuels.
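
A back-of-the-envelope estimate makes the scale concrete. All figures below are illustrative assumptions, not measurements of any particular system:

```python
gpus = 8                  # accelerators used for training (assumed)
gpu_power_kw = 0.3        # ~300 W draw per GPU under load (assumed)
hours = 24 * 14           # a two-week training run (assumed)
pue = 1.5                 # data-centre power usage effectiveness (assumed)
kg_co2_per_kwh = 0.4      # grid carbon intensity (assumed)

energy_kwh = gpus * gpu_power_kw * hours * pue
emissions_kg = energy_kwh * kg_co2_per_kwh
print(f"~{energy_kwh:,.0f} kWh consumed, ~{emissions_kg:,.0f} kg CO2 emitted")
```

Under these assumptions, a single two-week training run consumes roughly 1,200 kWh and emits close to half a tonne of CO2; large production models can run far higher.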

Computer vision systems also need powerful hardware, such as GPUs, which have short lifespans and require frequent upgrades; such setups generate a significant volume of electronic waste that is challenging to recycle. In addition, computer vision applications produce massive volumes of data that must be saved for further processing and analysis, demanding large amounts of storage space that consume yet more energy and add to greenhouse gas emissions.

Responsible build and use of computer vision

Computer vision applications are still at a nascent stage and have a long way to go. Until governmental bodies can effectively regulate these developing technologies, organisations and individuals must take the initiative to use computer vision and facial recognition ethically and responsibly. The fundamental principle is to build responsibly and deploy the technology only for its intended purpose. To put this into perspective: if the requirement is only to count footfalls, computer vision should be deployed only to count, not for facial recognition simply because the technology allows it. Similarly, those with access to the data should uphold ethical practices and handle it responsibly. Only then can we reap the full benefits of this technology without endangering people's privacy and safety.
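
Here is a minimal sketch of what "build only for the purpose" can look like in code: faces are irreversibly blurred before a frame is ever stored, so footfalls can be counted without retaining identities. It uses the Haar face detector bundled with the opencv-python package; wiring it into an actual counting pipeline is left out:

```python
import cv2

# Haar cascade face detector shipped with the opencv-python package.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymise(frame):
    """Blur every detected face region before the frame is stored."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        # Strong Gaussian blur makes the face region unrecognisable.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame
```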

This article is written by a member of the AIM Leaders Council, an invitation-only forum of senior executives in the Data Science and Analytics industry.


Swaroop Shivaram
Swaroop has over 17 years of experience in IP-based video cameras, surveillance and computer vision technology, building and managing enterprise-scale video solutions. A passionate evangelist of video analytics, he holds two patents in the domain and has been involved in incubating and transforming retail organisations using AI/ML computer vision technology. He currently leads the computer vision platform at Lowe's as Director - Data Science.
