Actions Taken For Ethical AI In 2021

Amid the ongoing technological revolution, artificial intelligence has already done a great deal of good for society. One of the finest examples is the use of AI-driven data analysis to combat the coronavirus outbreak. Companies harness the power of AI to provide customised experiences to their customers. At the same time, they are scaling up their regulatory, reputational and legal risks. Cases such as Los Angeles suing IBM for gathering and selling users' location data to marketing companies, the Cambridge Analytica and Facebook data scandal, and Ryan Abbott and his team's push for an AI to be recognised as an inventor all raise ethical concerns associated with AI.

To that end, we are listing some of the crucial steps taken towards ethical AI in 2021 for a more inclusive, accountable and transparent future.

Global Agreement on Ethics of AI

All 193 member states of the United Nations Educational, Scientific and Cultural Organization (UNESCO) have adopted a historic agreement that defines the common principles and values required to ensure the healthy development of artificial intelligence. The adopted text marks a major step towards guiding the construction of the legal infrastructure needed for the ethical development of AI technologies, and it will help reduce the risks those technologies entail.



The world has seen increased gender and ethnic bias, significant threats to privacy, dignity and agency, the dangers of mass surveillance, and the growing use of unreliable artificial intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to address these issues, UNESCO said in a statement.

Tech Giants Saying "No" to Unethical AI Projects

Three of the leading tech players, namely IBM, Google and Microsoft, have turned down projects shadowed by ethical concerns. Google Cloud experts agreed not to move forward with the idea of building AI for financial institutions to make lending decisions; the project was put on hold until concerns about gender and racial bias are resolved. Microsoft, too, has limited the use of its voice-mimicking software amid concerns that the technique could be used to create deepfakes. Similarly, sensing the possibility of misuse, IBM discontinued its facial recognition services altogether.


The earlier trend of tech giants releasing AI technologies such as facial recognition and chatbots directly into the market, without due diligence on potential biases or downsides, has seen a reversal. As more people and ethical AI organisations have begun to speak out against AI's ethical problems, tech players have formed ethics committees to examine their new products.

European Commission’s Approach towards Trustworthy AI

To develop human-centric and secure AI for its people, the European Commission has proposed the first legal framework on AI, with experts equating it with the General Data Protection Regulation (GDPR). Taking a risk-based approach, the proposed regulation classifies AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk. The grouping depends on the scale and severity of the bias or risks associated with the technology.

The concerns are real; take, for instance, the case of a Detroit man who was wrongfully taken into custody for shoplifting after a facial recognition misidentification. At the same time, the US government has urged the EU to ensure that AI is not overregulated.

Way forward

The battle to become an AI superpower is intensifying, with countries vying for dominance in the field through technological breakthroughs. As a result, countries such as China, the United States, Japan, Canada, Singapore and France, to name a few, are investing heavily in AI research and development. However, there are currently no norms or standards in place for ethical AI research, design or use.

It is time to ensure that machines making decisions affecting individual rights can explain those decisions and, if challenged, have them reviewed by a competent human authority. In addition, companies need to be transparent when deploying AI, people should be informed about any such usage, and an effective redressal mechanism must be put in place to address any discriminatory outcomes.


Kumar Gandharv
Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech journalist at AIM. A keen observer of national and IR-related news.
