
Former Google Exec Lists 4 Dangers Associated With AI

Former Google exec Kai-Fu Lee lists the top four risks associated with the use of AI.

The tech world has long debated the benefits and potential risks of deploying and advancing artificial intelligence (AI). Kai-Fu Lee, the former president of Google China and a former corporate VP at Microsoft, recently listed what he considers the top four risks associated with artificial intelligence.

At present, Lee is the CEO of Sinovation Ventures, a Chinese VC firm. Earlier in his career, he also served as a principal scientist and VP at Apple. Lee ranks the top AI dangers, in order, as: AI-powered warfare, externalities, the inability to explain consequential decisions, and risks to personal data.



AI-powered Warfare 

“The single largest danger is autonomous weapons,” Lee said, explaining that warfare is the only domain in which AI is trained to kill humans, or even specifically “trained to assassinate.” For instance, a drone equipped with facial recognition or cell-signal tracking can fly itself and seek out specific people.

This is also why, in 2015, tech magnate Elon Musk and Apple co-founder Steve Wozniak, along with thousands of AI researchers, signed an open letter proposing a ban on autonomous weapons. The proposal has drawn support from 30 countries. However, a report commissioned by the US Congress advised against such a ban.

Lee said that the proliferation and affordability of autonomous weapons would wreak havoc, allowing terrorists to use them to carry out genocide.

“It changes the future of warfare. We need to figure out how to ban or regulate it,” Lee added.  

AI Fixations

According to Lee, the second-highest risk is AI’s unintended negative consequences, or externalities: when an AI becomes so good at the single goal it is trained to optimise, it ignores the harms it causes along the way. Citing an example, Lee noted that when YouTube suggests videos a user is likely to click on, it can also surface harmful content and influence the user’s thinking.

Another real-life example of this fixation problem comes from Facebook’s own internal research in 2019, which found that photo-sharing platform Instagram made 32 per cent of teenage girls feel worse about their bodies. Lee suggested that these fallouts hurt tech giants less than they erode public trust in algorithms, and in turn, in AI itself.

Inability to explain AI’s decisions

Decisions taken by an AI can be crucial, especially when human lives are at stake. Consider the classic thought experiment in ethics, the trolley problem: an onlooker can save five people from being hit by a runaway trolley by diverting it onto a track where it will kill one person instead. An AI model cannot explain the reasoning behind the decision it makes in a situation like that.

In real-life situations, artificial intelligence likewise cannot explain its decisions in autonomous driving, medical decision-making, and surgery.

Risking Personal Data 

Lee further believes that in the next twenty years, all data will be digitised. While this will boost the use of AI for decision-making and optimisation, it also puts individuals’ personal data at risk.

AI is already being used to identify, monitor and track people across devices and locations. Tech players with access to enormous amounts of this personal data can feed it into analyses and predictions that serve targeted ads and shape our thinking. Such sensitive data concentrated in the hands of tech giants is a worrying prospect.

Lee has also co-authored the book AI 2041: Ten Visions For Our Future.

Debolina Biswas