
Meta’s AI Has Gone Haywire, It’s Not the First Time

Meta's AI stickers have gone viral on the internet — but not in a good way



Less than a fortnight ago, Meta announced AI-generated chat stickers at its annual Connect event, alongside an AI-powered image editor for Instagram. Relying on the company’s homegrown Llama 2 large language model (built in collaboration with Microsoft), the feature can create “multiple unique, high-quality stickers in seconds” from English-language prompts. The recently piloted AI stickers have gone viral — but not in a good way.

In the press release, Meta said that “billions of stickers” are sent on its platforms monthly, giving users billions of opportunities to generate anything they want. 

The new feature, however, is being misused in various ways: generating a Nintendo character holding a rifle, the Canadian prime minister naked and bending over, a busty Karl Marx in a dress, and much worse. The reason is that the model behind the feature, Emu, lacks adequate filters, allowing users to prompt it with controversial phrases and images.

According to the company’s release blog, Emu, which stands for ‘Expressive Media Universe’, is “a quality-tuned latent diffusion model that significantly outperforms a publicly available state-of-the-art model SDXLv1.0 on visual appeal.”

It looks like in the rush to launch the hottest AI tools, Meta as well as other giants like Microsoft continue to forget that people will always use technology for chaos.

Not The First

Unsurprisingly, this is not a first-of-its-kind situation. Late last year, Meta released Galactica, a language model built specifically for science research, which had to be taken down just three days after release. According to MIT Technology Review, the tool, designed to assist scientists with scientific compositions, was pulled because it was “a mindless bot that cannot tell fact from fiction.”

“The people who made the demo had to take it down because they just couldn’t take the heat,” Yann LeCun, Meta’s chief artificial intelligence scientist, had commented.

Similarly, last year, the company released BlenderBot, an AI chatbot that anyone in the US could talk with. Almost immediately, users across the country began sharing the uncomfortable content the AI was spewing, including racist stereotypes and conspiracy theories. To date, the tool remains available only to US users, as per the official page.

In 2016, in a visionary attempt at making machines understand human language, Microsoft released Tay, a bot that (also) turned out to be an example of how AI can go awry. Within 16 hours of its arrival on Twitter, Tay had turned into a brazen anti-Semite and was taken offline for retooling.

Today, seven years later, the image-conscious company grapples with the same problem; only the tool in question is different.

Microsoft Bing’s Image Creator was launched in March to let users generate images via AI. Even though the tool has a long list of filtered words and phrases, people have found ways to bypass them and produce pictures of their beloved fictional characters engaged in violence and terrorism.

Most generative AI models today with strict filters and terms of use “are playing a game of semantic whack-a-mole,” 404 Media recently stated in a piece on how Bing is creating images of the 9/11 attacks. “Microsoft can ban individual phrases from prompts forever, until there are no words left, and people will still get around filters,” Samantha Cole noted.

Some companies are making models without any filters whatsoever, and releasing them into the wild as permanently-accessible files.

Felt Cute, Might Delete Later

Tech companies’ lack of commitment to transparency, their commercial objectives, and their free scraping of content from the web are visible in their latest releases. It won’t be long before Google’s Bard, which has recently been upgraded, joins the infamous group.

It looks like Meta has learnt from previous AI tomfoolery, since it has pursued a limited rollout of AI-generated stickers. That way, the team can address issues and curb abuse before the feature reaches the masses.

Despite these efforts, the models released today cannot fill in the blanks the way humans can. The customisable stickers definitely seem cute, but the company appears to have overshot its goal of “enabling new forms of connection and expression” through AI.

Due to the limited rollout of AI stickers, Analytics India Magazine was unable to replicate the examples or generate new ones.


Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.