Meta Wants to Build Generative AI But Not Responsibly

Meta has done a little reshuffle, relocating its Responsible AI team to join the generative AI crew.



Meta has been grabbing headlines, and not the good kind. In a recent turn of events, the company has done a little reshuffle, relocating its Responsible AI team to join the generative AI crew. The move comes off as misjudged, given the series of unfortunate events that have unfolded at Meta over the past few months.

Its social platforms are causing all sorts of headaches – from tagging some Palestinian Instagram users with the label “terrorist” to WhatsApp’s AI whipping up wacky stickers in response to certain prompts, and even Instagram’s algorithms unintentionally helping folks stumble upon child sexual abuse materials.

During its big layoffs back in May, Meta also pulled the plug on a fact-checking project that had taken a good six months to put together. Insider info suggests it wasn’t the smoothest move.


David Harris, a senior product lead in charge of the Metaverse project, spilled the beans in a June piece for The Guardian. Wearing his AI researcher hat, he painted a picture of concern. “Sadly,” he said, “the civic integrity team I was part of got the boot in 2020, and with all these rounds of layoffs, I’m worried the company’s ability to combat these issues has taken a hit.”

This team’s been through the wringer before, with a reshuffle earlier this year that left the Responsible AI team looking more like a ghost of its former self, according to Business Insider. Reports even hinted that the team, born in 2019, had limited say-so and had to jump through hoops of stakeholder negotiations to get anything done. Not to mention, last September, Meta bid farewell to its Responsible Innovation Team, a group meant to tackle “potential harms to society”. 

All About Generative AI 

Meta’s decision to disband its Responsible AI team cuts both ways. On the one hand, having a separate crew for Responsible AI means they can do their thing independently of the folks creating the very tools they’re meant to check.

But here’s the rub – these researchers tend to jump into action a tad too late. If they were in on the game early in the development process, they could catch some of the problems right out of the gate.

Take, for instance, Meta’s brainchild, Galactica – a ChatGPT-like model for scientific research. It hit the scene, but three days later, it was lights out. Why? Well, it turns out the model couldn’t tell the difference between truth and make-believe. That’s a bit of a hiccup for a language model meant to whip up scientific text. People discovered it was cooking up fake papers, attributing them to real authors, and even cranking out Wiki articles on the interstellar history of bears.

As Meta keeps churning out these AI models, most of the Responsible AI team is making a move to join Meta’s generative AI squad. The company’s been slashing costs left and right, hitting departments all over, including the Responsible AI team. But Meta’s eyes are still locked on the prize, which is all about that generative AI game.

Year of Inefficiency 

At the start of 2023, Mark Zuckerberg, the head honcho at Meta, said, “Our management theme for 2023 is the ‘Year of Efficiency’ and we’re focused on becoming a stronger and more nimble organisation,” as part of the release of Meta’s fourth-quarter earnings report.

But then things got a bit rocky. They started laying off a bunch of people, there was the famous LLaMA leak, and now the splitting up of the people building AI responsibly.

For a long time, Meta was like a superhero in Silicon Valley when it came to safety and ethics. But in May, a bunch of people got the boot in a big company shake-up. Former trust and safety employees felt like their jobs were always on the line, and their managers didn’t always get how important their work was for Meta’s success.

Adding to the brouhaha, the recent big switch-up of the teams dealing with safety and AI ethics shows how far companies are ready to go to keep Wall Street happy and meet its demands for efficiency.

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
