
Bigtech in Ethical Dilemma

Ethics cannot be an afterthought, but Big Tech argues otherwise.

AI is the future, so companies want to embrace it. But what happens when they deploy it without ensuring it serves humans first? We already know the consequences, and we at AIM have covered them extensively: Amazon’s sexist recruiting bot, the robot that broke a chess player’s finger, and generative AIs going berserk, producing sexist and racist images.

This story is an attempt not to call out more such blunders, but to look at what companies can learn from each other to ensure they don’t happen again. In April 2022, the United Nations took up the baton with UNESCO’s set of AI Ethics recommendations, which nearly 190 countries adopted. One could argue that this list is the closest thing we have to a global Ethical AI standard. To some, it is useful as a backdrop, but not something they wish to apply directly.

The reason: the list is complicated and not easily digested. Moreover, some firms created their AI Ethics guidelines before the UNESCO release and eventually decided that their own principles were sufficient, so there was no need to change their preexisting proprietary approach.

Ethics First, Business Later 

In 2017, DeepMind launched DeepMind Ethics & Society to understand and explore the real-world impacts of AI. The British subsidiary of Google’s parent company, Alphabet, believes that ethics cannot be an afterthought.

The company put its Responsible AI principles into practice around AlphaFold, its groundbreaking AI system that takes the amino acid sequence of a protein and automatically predicts the three-dimensional shape the protein will assume.

From the project’s outset, DeepMind worked with its in-house Pioneering Responsibly team, which has expertise in ethics and AI safety, to work through possible issues around the release of AlphaFold and its predictions. This included having a dedicated ethics researcher on the project.

Earlier this year, the research firm unveiled Sparrow, a “useful dialogue agent that reduces the risk of unsafe and inappropriate answers”. However, DeepMind considers Sparrow a research-based proof-of-concept model that is not yet ready for deployment. A future version is also expected to support multiple languages, cultures and dialects.

DeepMind also engages in red-teaming its models: thinking about the nefarious ways someone might use or misuse the AI it is building, or how someone might try to break the technology. It also performs what it calls “pre-mortems”, in which the team assumes everything has gone wrong and then works out why it might have gone wrong.

In 2021, DeepMind’s sister company, Google, formed the Responsible AI and Human-Centred Technology (RAI-HCT) team to conduct research and develop methodologies, technologies and best practices that ensure AI systems are built responsibly, putting its AI Principles into practice at scale. But Google’s ethical research team has been in a state of flux, witnessing several high-profile exits over the company’s handling of AI ethics.

In an interview with AIM, Pushmeet Kohli, DeepMind’s head of research for AI for science and reliability, said, “I would say that we are probably one of the leading groups in this area, but in terms of sharing and deploying these models, we have been more thoughtful. We are doing a lot of work on safety and security and for the responsible deployment of these techniques.”

Meta’s Balancing Act

Supporting the good cause, Meta AI has also been taking baby steps towards creating responsible services. In the last two years, Meta has announced several plans to collaborate with policymakers, experts and industry partners to build the company’s flagship product, the metaverse, responsibly. Facebook (now Meta) had introduced facial recognition back in 2010. Eleven years later, after accumulating over a billion facial recognition profiles and attracting significant flak over privacy concerns globally, the company disabled the system.

Meanwhile, of the 11,000 employees Meta laid off last week, 13% belonged to ‘Probability’, a research team focused on machine learning infrastructure whose work touches upon privacy, integrity and reliability, alongside machine learning for people, and more.

Head-in-the-sand approach

Moving to the other side of the spectrum, in September 2022, Meta dissolved its Responsible Innovation team, a group tasked with addressing potential ethical concerns about its products. 

In the same month, Elon Musk was asked at Tesla’s AI Day 2022 Q&A session whether the company had been looking at the big-picture aspects of what walking robots will do to society.

Musk has repeatedly stated that he views AI as an existential risk to humankind. One would assume that building robots to walk amongst us, with perhaps millions upon millions of them sold for public and private use, naturally raises Ethical AI issues. But Musk’s response suggested that the company considers it premature to explore those AI Ethics questions in any notable depth.

Unfortunately, a head-in-the-sand approach to Ethical AI is bad news. Once the robotic system gets further down the development path, it will become increasingly hard and costly to embed AI Ethics precepts in the system. This is a shortsighted way of dealing with Ethical AI considerations: ethics is treated as an afterthought that may someday rear its head, but until then, it is heads-down and full speed ahead.

PS: The story was written using a keyboard.

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.