
Why Responsible AI is Just Fluff Talk for Microsoft, Others

The move is a big red flag that exposes Big Tech’s hypocritical intentions around responsible AI.



In February, at the launch of the new Bing in Redmond, Washington, Microsoft CEO Satya Nadella emphasised the company’s allegiance to its AI principles. ‘It’s about being clear-eyed about the unintended consequences of any new technology. We’re very grounded in what’s happening in the broader world,’ he stated. To Nadella, it seemed very important to stay alert to the liabilities of the stream of AI products Microsoft was releasing in partnership with Sam Altman’s OpenAI.

Microsoft layoffs in Responsible AI

However, a recent report by Platformer revealed that Microsoft’s entire ethics and society team had been laid off as part of the latest round of layoffs affecting 10,000 employees. While Microsoft still has its main governing body, the Office of Responsible AI, to draft rules for the company’s AI initiatives, the outlet spoke to employees who said the ethics and society team was crucial in ensuring that Microsoft’s responsible AI principles were actually reflected in product design.

The move is a big red flag that exposes Big Tech’s hypocritical intentions around responsible AI. This is a particularly critical time for Microsoft to focus on ethical safety, given how widely OpenAI’s LLMs have been released into the world. Platformer also reported that the team had been working on pinpointing the risks of integrating OpenAI’s technology into Microsoft’s products.

What’s even more worrying is that Platformer obtained an audio recording of a meeting in which John Montgomery, Corporate Vice President, Program Management, AI Platform at Microsoft, said there was pressure from top management to move quickly. ‘The pressure from CTO Kevin (Scott) and Satya (Nadella) is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed,’ he said.


Nadella’s company still has three teams working on safety controls: RAISE (Responsible AI Strategy in Engineering), the Office of Responsible AI and the advisory group ‘Aether’. But given how accessible LLMs now are to the general public, and how commonplace misinformation spread by hallucinations has become, any cuts to responsible AI teams are worth noting.

Twitter undervalues Ethical AI teams

In November last year, when Twitter started mass layoffs, the company’s Ethical AI team was one of the first to go. The microblogging site’s ethics team, called Machine Learning, Ethics, Transparency and Accountability, or META, had been formed only a year earlier to detect potential biases and harms in the company’s algorithms. In its brief existence, META had made impactful changes at Twitter.

For instance, the team stopped the use of an automated image-cropping algorithm after its members found racial bias in it. Musk had also let go of the company’s entire Human Rights team, which did the critical job of investigating politicians and thought leaders who were abusing the platform.

Meta absorbs Responsible AI team

A couple of months before that, Zuckerberg’s Meta disbanded its Responsible Innovation team, which had been formed to look into the ‘potential harms to society’ caused by Facebook’s products.

Earlier, in June, in an attempt to cut costs, Meta consolidated its Responsible AI group into its Social Impact team, while FAIR (renamed Fundamental AI Research from Facebook AI Research) was folded into its Reality Labs unit as well.

At times, the conflict within companies has come into public view. In 2020, Google’s firing of ethical AI researcher Timnit Gebru grabbed headlines everywhere. Gebru’s critical work on LLMs gained even more attention as a result of the spat, and multiple senior members of the team departed after her.

In comparison with Hugging Face

Interestingly, Margaret Mitchell, a co-author of Gebru’s LLM paper who was also fired, went on to join the open-source platform Hugging Face following the incident.

Coincidentally, platforms like Hugging Face have been at the forefront of taking up the mantle of ‘responsibility’. Admittedly, Hugging Face is a very different creature from the Big Tech companies: it builds AI models and hosts many more built by others. But its stricter adherence to ethics is apparent. In September last year, Hugging Face partnered with ServiceNow’s R&D division to launch a project to develop state-of-the-art AI systems for coding in an ‘open and responsible’ way.

Hugging Face also came up with ‘OpenRAILs’, a new class of AI-specific licences that promote open access, use and distribution of AI artefacts in a responsible manner.

In an interview with Analytics India Magazine, the platform’s tech and regulatory affairs counsel, Carlos Munoz Ferrandis, said, ‘You can already see Meta releasing its big models, such as OPT-175B, under RAIL licences, enabling restricted access for research purposes and including a responsible use clause with specific restrictions based on the potential of the licensed artefact.’
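For readers curious how these licences surface in practice, here is a minimal sketch using the `huggingface_hub` Python client; the repo id `bigscience/bloom` (released under a RAIL licence) is used purely as an illustrative example. It simply reads the licence tag attached to a model hosted on the Hub.

```python
from huggingface_hub import model_info

# Fetch public metadata for a model hosted on the Hugging Face Hub.
# "bigscience/bloom" is used here only as an example of a RAIL-licensed repo.
info = model_info("bigscience/bloom")

# The licence is exposed as a repo tag, e.g. "license:bigscience-bloom-rail-1.0".
license_tags = [tag for tag in info.tags if tag.startswith("license:")]
print(license_tags)
```

On gated repositories, downloading the weights additionally requires accepting the licence terms on the Hub first, which is how the ‘restricted access’ Ferrandis describes is enforced.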

Big Tech may be racing, and failing, on its way to the top, but the approach of companies like Hugging Face may be a valuable lesson in pausing to pay attention.


Poulomi Chatterjee

Poulomi is a Technology Journalist with Analytics India Magazine. Her fascination with tech and eagerness to dive into new areas led her to the dynamic world of AI and data analytics.