Finally, Tech Giants Are Turning Down Unethical AI Projects

Tech giants are increasingly saying no to technologies that involve facial recognition, voice mimicking software and emotion analysis.

The pros of artificial intelligence have always been accompanied by the cons of leveraging it. With the launch of every new system that requires users to share personal and biometric data, a school of thought has emerged voicing ethical and privacy concerns about these systems. A recent investigative report by Reuters sheds light on how three US tech giants, Google, IBM and Microsoft, have been resisting and turning down projects on account of ethics concerns. 

Reuters interviewed the AI ethics chiefs at the three companies and found that these technologies were restricted by panels consisting of executives and leaders. The report emphasises how the tech giants are striking a balance: leveraging lucrative technologies while meeting their social responsibilities. They are increasingly saying no to technologies that involve facial recognition, voice-mimicking software and emotion analysis. 



Tracy Pizzo Frey, Managing Director for Responsible AI at Google Cloud, said that the panel's job was to maximise the opportunities and minimise the harms of AI. 

Google’s Dilemma 

Google Cloud has built AI tools that help financial institutions such as HSBC and Deutsche Bank detect abnormal transactions and theft. However, according to Reuters, in September 2020, while Google Cloud was gearing up to help yet another player in the financial sector, it decided to turn the project down. 

After weeks of internal discussion, experts at Google Cloud decided not to build an AI to help the financial institution make lending decisions, on account of the project being ethically dicey. Despite the growth that the AI-backed credit-scoring industry promised for Google, the AI could perpetuate racial and gender bias present in the data and patterns it would be trained on. Google said it would skip all deals with financial institutions and service providers related to creditworthiness until the concerns are resolved.

Additionally, starting last year, Google has also blocked AI features that analyse people's emotions, fearing cultural insensitivity.

Microsoft Restricts Unethical Tech 

Microsoft restricted the use of software that mimics voices after a panel debated the topic for more than two years. The company had to weigh the benefits its voice-mimicry technology offered in restoring impaired people's speech against the risk of the technology being used to create deepfakes; that is, people could use voice mimicry to impersonate others without their consent. 

In February this year, the panellists, specialists in human rights, data science and engineering, gave the technology the green light, highlighting its advantages. However, its use remains restricted: the user or subject's consent must be verified before the technology is used, and a team called 'Responsible AI Champs' has to be trained on the policy governing approved technology purchases. 

IBM joins the League

Last year, at the onset of the pandemic, internal sources revealed to Reuters that IBM rejected a client's request for an advanced facial recognition system to spot fevers and masks. Almost six months later, IBM announced that it was discontinuing its face-recognition service altogether. 

IBM leaders are also discussing potential unethical uses of implants and wearables that wire computers to brains. The AI Ethics Board at IBM has considered the possibility of hackers manipulating the thoughts of people wearing neuro-implants that are originally meant to benefit the physically impaired. 

What does it mean? 

Almost five years ago, tech companies were releasing AI systems and services, including chatbots and photo-tagging, without giving much thought to ethics, bias, or potential harms. However, with more people and ethical-AI groups raising their voices against these concerns, tech companies have formed ethics committees (Microsoft in 2017, Google and IBM in 2018) to review their new launches. 

Governments in the US and elsewhere believe that placing certain restrictions on the use of these AI technologies will prove beneficial. In April, the European Union announced that it was considering a ban on 'unacceptable' uses of AI, such as mass surveillance and the calculation of social credit scores. Going forward, it is unclear whether these tech giants will wait for a resolution to differentiate the good from the bad or continue to push AI's potential further. 

Earlier, DeepMind researcher Raia Hadsell shed light on ethics in AI. Read the story here.


Debolina Biswas
After diving deep into the Indian startup ecosystem, Debolina is now a Technology Journalist. When not writing, she is found reading or playing with paint brushes and palette knives.
