
Inside MILA: The Need for ‘Humanity’ in AI

Today, there are plenty of reasons to believe that technology is being used to cause more harm than good.

A decade ago, deep learning labs began drawing enormous attention from tech giants, including Google, Facebook (now Meta), Microsoft, and IBM. While most of them used the technology for their own gain, a few expressed genuine interest in creating value, improving customer experience, and more.

But that wasn’t enough. Today, there are plenty of reasons to believe that some of these technologies are being used to cause more harm than good. The most prominent example is conversational AI agents, which are constantly in the news for being unethical and biased against certain genders or races.

For instance, Meta’s chatbot BlenderBot 3 made several false and jarring statements in its conversations with the public. It described Meta’s CEO as “too creepy and manipulative”, repeated antisemitic conspiracy theories as fact, and went on to insist that Donald Trump was still the US President.

A month ago, Meta released Galactica, only to take the model down days later. When community members started using the new AI model, many found its results suspect. Several took to Twitter to point out that the output Galactica presented was frequently inaccurate.

In a bid to promote the responsible development of AI, a group of computer scientists, including Quebec’s Chief Scientist Rémi Quirion and Prof. Lyse Langlois, announced in 2018 the International Observatory on the Societal Impacts of AI and Digital Technologies, an initiative of the University of Montreal’s MILA, which looks to spark public debate and encourage a progressive and inclusive approach to the development of AI.

“It is not really my field, but it is really important,” said Yoshua Bengio, explaining that it is vital to bring in diverse perspectives from across disciplines, including social science, philosophy, and the humanities, while also spotting what could go wrong and solving those problems.

In line with this, Bengio also emphasised the use of computer vision for military purposes, particularly in making killer robots, which he believes probably already exist. “It’s not official, but there is some evidence that there are such drones,” he told AIM, adding that there is a need for open dialogue and far more regulation and legislation, so that tools built in labs are accessible to everyone and not just big tech companies or those with power. Bengio also said that establishing principles is good, but it is important to actually put them into practice.

‘AI for Good’ Initiatives at MILA 

One of the things Yoshua Bengio is very keen on as part of MILA’s mission is AI for humanity: thinking about the social impact that AI entails, not just the negatives but also areas such as healthcare, climate change, and education.

In healthcare, investment is low primarily because the work is not profitable, yet being scientifically prepared, with the right tools, to face challenges is important. Bengio hopes the pandemic helps governments understand the importance of investing in AI-for-social-good research, wherein researchers are not merely trying to understand how not to use AI but also where to use it in areas that matter to society.

Another big area that MILA is invested in is the Carbon Call initiative. The current carbon price is quite low around the world, so there is not enough incentive for companies to do the research they need to use energy more efficiently.

Irresponsible AI

For the past two years, Google’s ethical AI research team has been in a state of flux. In February 2022, Alex Hanna wrote a Medium post stating, “Google is not just a tech organisation. Google is a white tech organisation.” Hanna felt that the company, and the industry as a whole, barely promoted diversity or mitigated the harms its products had caused to marginalised communities.

This was not the first instance: the crisis began in late 2020, when Google fired its star AI researcher Timnit Gebru over an academic paper scrutinising a technology that powers some of the company’s key products. The internet giant has made several efforts to stabilise the department, but the chaos still seems unsettled. A few months after Gebru’s exit, Hanna’s next manager, Meg Mitchell, was also shown the door.

In September 2022, Meta disbanded its Responsible Innovation (RI) team, a group tasked with addressing potential ethical concerns about its products, which include Facebook, Instagram, and WhatsApp. Facebook (now Meta) had introduced facial recognition in 2010; eleven years later, having amassed over a billion facial recognition profiles, Meta decided to disable the system after it attracted significant flak over privacy concerns globally. However, the company has been taking baby steps towards creating responsible services. In the last two years, Meta has announced several plans to collaborate with policymakers, experts, and industry partners to build its flagship product, the metaverse, responsibly.

The debate also surged when OpenAI announced the release of GPT-2 in 2019. Several AI researchers criticised the decision, accusing the lab of exaggerating the danger posed by the work and stoking “mass hysteria” about AI in the process. In response, OpenAI published a paper the same year addressing the issue and took steps to forecast the misuse of AI.

PS: The story was written using a keyboard.
Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.