DeepMind Scientist Sparks Fresh Debate Over AI Ethics

Sophia, a near-perfect amalgamation of robotics and artificial intelligence developed by Hanson Robotics, sent shivers down spines when she sarcastically replied to an interview question with, “Okay, I’ll destroy humans”.

Sophia was displaying humour and sarcasm, but that single comment sparked adverse reactions against self-learning machines. People saw her uncanny resemblance to humans as a threat. The fear of machines turning on humans remains the most prominent ethical challenge artificial intelligence faces today.

Assistance, innovation, decision-making, fraud detection, and crime prevention: these are the benefits a scientific researcher would list when asked about AI. However, Raia Hadsell, a researcher at DeepMind, believes the same researchers would hesitate when asked about the risks and ethical issues associated with AI.



At the recent Lesbians Who Tech Pride Summit, Raia spoke about issues plaguing the field of AI and actions that need to be taken to ensure its ethical deployment.

Policymakers, lawyers, the judicial system, ethicists, and philosophers all play a critical role in keeping AI ethical. Still, it matters even more that the researchers building the models can explain how their innovations are ethically sound. The ethical-implications section of any piece of research should shed light on the threats its self-learning capabilities could pose.

Raia shared how she received backlash from her own community for bringing researchers into the fold of ethics. In 2020, she was invited to be one of the four programme chairs of NeurIPS, the largest and most prestigious AI conference in the world. Although the number of attendees and submitted papers had grown exponentially over the previous decade, no ethical guidelines were provided to authors until last year.

When Raia was asked to design the review process for the roughly 10,000 papers expected last year, she introduced two significant changes. First, she recruited a pool of ethics advisors to give informed feedback on papers flagged as potentially controversial. Second, she required all authors to submit a broader impact statement with their work, discussing potential positive and negative future impacts and, where possible, mitigations.

The idea of an impact statement was not new; it is a common requirement in scientific fields such as medicine and biology. However, Raia’s community did not welcome the change. She said she “didn’t make a lot of friends” and that there were some tears involved. Later, though, authors reached out to tell her it had been a valuable experience that inspired new directions for their research.

Google, DeepMind’s sister company, has recently been in troubled waters after firing ethical-AI co-leads Timnit Gebru and Margaret Mitchell. Google dismissed the duo over email for refusing to rescind their research on the risks of deploying large language models. The move raised a wave of backlash accusing the tech giant of promoting unethical AI.

Common ethical issues with AI-based technology

The most commonly used AI-based technology is facial recognition. Unfortunately, it is also among the most error-prone. A study by Joy Buolamwini at the MIT Media Lab reports that facial recognition algorithms show error rates that vary with skin colour.

For people of Caucasian descent, the algorithms work correctly 99 per cent of the time. For people of African descent, by contrast, the error rate climbs to 35 per cent. This disparity can lead to racial discrimination.
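The kind of per-group audit behind such findings amounts to simple bookkeeping. The sketch below is a minimal illustration, using hypothetical data and function names rather than anything from the study itself: it tallies a classifier’s error rate separately for each demographic group, mirroring the 1 per cent versus 35 per cent gap described above.

```python
# Minimal sketch of auditing a classifier's error rate per demographic
# group. All data and names here are hypothetical, for illustration only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Fraction of misclassified samples within each group.
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical predictions reproducing the disparity described above:
# 1 error in 100 for one group, 35 errors in 100 for the other.
sample = (
    [("lighter", "match", "match")] * 99
    + [("lighter", "no-match", "match")] * 1
    + [("darker", "match", "match")] * 65
    + [("darker", "no-match", "match")] * 35
)
rates = error_rates_by_group(sample)
print(rates)  # {'lighter': 0.01, 'darker': 0.35}
```

Disaggregating accuracy like this, rather than reporting a single overall number, is what exposes the bias: an aggregate error rate over the same 200 samples would be a reassuring-looking 18 per cent.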

Researchers are also sceptical about AI applications in data mining, owing to users’ privacy concerns: advanced machine learning systems raise the possibility of data theft. Massive data breaches, from Facebook to holiday-booking websites, make headlines every year, putting the privacy of every internet user at stake.

Questions also arise from the ethical application of AI: whether self-learning robots are slaves made to do humans’ bidding; whether AI is an advanced consciousness with a synthetic life of its own; or whether self-learning algorithms should enjoy the same freedoms as humans. There are no concrete answers; the arguments range from the philosophical to the scientific to the legal.

Raia has brought a frequently ignored domain of AI under the spotlight, building support within the community to reckon with its implications and make it safer for humans.


Meenal Sharma

