Why Responsible AI is Just Fluff Talk for Microsoft, Others

The move is a big red flag that exposes Big Tech’s hypocritical intentions around responsible AI.

In February, at the launch of the new Bing in Redmond, Washington, Microsoft CEO Satya Nadella emphasised Microsoft’s allegiance to its AI principles. ‘It’s about being clear-eyed about the unintended consequences of any new technology. We’re very grounded in what’s happening in the broader world,’ he stated. Nadella, it seemed, considered it important to remain alert to the liabilities of the stream of AI products Microsoft was releasing in partnership with Sam Altman’s OpenAI.

Microsoft lays off a responsible AI team

However, a recent report by Platformer revealed that Microsoft’s entire ethics and society team had been laid off as part of the latest round of cuts, which affected 10,000 employees. Microsoft still has a central governing body, the Office of Responsible AI, which drafts the rules for the company’s AI initiatives. But employees told the outlet that the ethics and society team was crucial in ensuring that Microsoft’s responsible AI principles were actually reflected in the design of the products it ships.

The timing could hardly be worse: with OpenAI’s LLMs released into the wild, this is precisely when Microsoft needs to focus on ethical safety. Platformer also reported that the team had been working on pinpointing the risks in the OpenAI technology being integrated into Microsoft’s product packages.

What’s even more worrying is that Platformer obtained an audio recording of a meeting in which John Montgomery, Corporate Vice President of AI Platform, said that there was pressure from top management to move quickly. ‘The pressure from CTO Kevin (Scott) and Satya (Nadella) is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed,’ he said.

John Montgomery, Corporate Vice President, Program Management, AI Platform at Microsoft

Nadella’s company does still have three teams working on safety controls: RAISE (Responsible AI Strategy in Engineering), the Office of Responsible AI, and the advisory group Aether. But given how accessible LLMs now are to the general public, and how commonplace misinformation from hallucinations has become, any cut to responsible AI teams is worth noting.

Twitter undervalues Ethical AI teams

In November last year, when Twitter began its mass layoffs, the company’s ethical AI team was one of the first to go. The microblogging site’s ethics team, called Machine Learning, Ethics, Transparency and Accountability, or META, had been formed in 2021 to detect potential biases and harms in the company’s algorithms. In its brief existence, META made impactful changes at Twitter.

For instance, the team got Twitter to stop using an automated image-cropping algorithm after its members found racial bias in it. Elon Musk had also let go of the company’s entire Human Rights team, which did the critical job of investigating politicians and thought leaders who were abusing the platform.

Meta absorbs Responsible AI team

A couple of months before Twitter’s cuts, Zuckerberg’s Meta had disbanded its Responsible Innovation team, which was formed to look into the ‘potential harms to society’ done by Facebook’s products.

Earlier still, in June, in an attempt to cut costs, Meta consolidated its Responsible AI group into its Social Impact team, and FAIR (renamed from Facebook AI Research to Fundamental AI Research) was folded into the Reality Labs unit as well.

At times, the conflict within companies has spilled into public view. In 2020, Google’s firing of ethical AI researcher Timnit Gebru grabbed headlines everywhere. Gebru’s critical work on LLMs only gained more attention as a result of the dispute, and multiple senior leaders on her team departed after her.

How Hugging Face compares

Interestingly, Margaret Mitchell, Gebru’s co-lead on Google’s Ethical AI team and a co-author of the LLM paper at the centre of the dispute, was also fired, and went on to join the open-source platform Hugging Face following the incident.

Meanwhile, platforms like Hugging Face have been at the forefront of taking up the mantle of ‘responsibility’. Admittedly, Hugging Face is a very different creature from the Big Tech companies: it builds some AI models of its own and hosts and distributes many more. But a stricter adherence to ethics is apparent. In September last year, Hugging Face partnered with ServiceNow’s R&D division to launch BigCode, a project to develop state-of-the-art code-generating AI systems in an ‘open and responsible’ way.

Hugging Face also came up with ‘OpenRAIL’, a new class of AI-specific licences (RAIL stands for Responsible AI License) that permit open access, use and distribution of AI artefacts while attaching use-based restrictions to keep that use responsible.

In an interview with Analytics India Magazine, the platform’s tech and regulatory affairs counsel, Carlos Muñoz Ferrandis, said: ‘You can already see Meta releasing its big models, such as OPT-175B under RAIL licences, enabling restricted access for research purposes and including a responsible use clause with specific restrictions based on the potential of the licensed artefact’.
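
To make this concrete, here is a minimal sketch of how a RAIL licence surfaces in practice on the Hugging Face Hub. It assumes the huggingface_hub Python library is installed and uses BLOOM, which ships under the BigScience OpenRAIL-M licence, as the example model; the exact tag string is whatever the model’s maintainers have set on the Hub.

```python
from huggingface_hub import model_info

# Fetch the Hub metadata for BLOOM, which is distributed under the
# BigScience OpenRAIL-M licence rather than a classic open-source licence.
info = model_info("bigscience/bloom")

# The licence is exposed as an ordinary tag on the model entry, so
# downstream tooling can check it before downloading or deploying.
licence_tags = [tag for tag in info.tags if tag.startswith("license:")]
print(licence_tags)  # e.g. ['license:bigscience-bloom-rail-1.0']
```

The design choice worth noting is that the restrictions travel with the artefact: OpenRAIL licences require anyone redistributing the model or its derivatives to pass the same use-based clauses downstream.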

Big Tech may be racing, and stumbling, on its way to the top, but the approach of companies like Hugging Face offers a valuable lesson in pausing to pay attention.
