
Is GPT-4chan the worst AI ever?

A condemnation letter against a single independent researcher smells of unnecessary pitchfork behaviour.

YouTuber and DeepJudge CTO Yannic Kilcher created an AI chatbot called ‘GPT-4chan’. The bot was trained on three years’ worth of posts from 4chan, the repulsive cousin of Reddit.

Kilcher fed the bot threads from the Politically Incorrect /pol/ board, a 4chan message board notorious for racist, xenophobic, and hateful content. The bot sparked a heated debate on social media before it went offline.

Recently, the AI community launched a petition ‘Condemning the deployment of GPT-4chan.’ The petition stated: “Unfortunately, we, the AI community, currently lack community norms around their responsible development and deployment. Nonetheless, it is essential for members of the AI community to condemn clearly irresponsible practices.”

GPT-4chan is a large language model trained on approximately 134.5 million posts from the Politically Incorrect /pol/ anonymous message board. Kilcher developed the model by fine-tuning GPT-J on a previously published dataset so that it would mimic the users of the /pol/ board. He posted about GPT-4chan on his YouTube channel, calling it the ‘Worst AI ever’.

“The model was good, in a terrible sense … It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/.”

He also claimed the model was more truthful than any other GPT model out there.

Kilcher said the bot had posted around 30,000 times on 4chan before being taken down, including more than 1,500 posts in a span of 24 hours.

The model was downloaded over 1,400 times, and links to it circulated on Twitter, Hacker News, and Reddit. Kilcher even created a website (no longer accessible) and published the code on GitHub. He said the idea for GPT-4chan came to him after Elon Musk claimed that the proportion of bots on Twitter was much higher than the official figure of 5 per cent.

Condemning GPT-4chan

The model was also hosted on Hugging Face, which initially limited access to it before removing access altogether. “Hugging Face as the model custodian (an interesting new concept) should implement an ethics review process to determine the harm hosted models may cause, and gate harmful models behind approval/usage agreements. Open science and software are wonderful principles but must be balanced against potential harm. Medical research has a strong ethics culture because we have an awful history of causing harm to people, usually from disempowered groups,” said AI safety researcher Dr Lauren Oakden-Rayner.

So far, the petition has been signed by more than 200 members of the community, including Yoshua Bengio, full professor at Université de Montréal; Sam Bowman, assistant professor at NYU; and Jonathan Berant, associate professor at Tel Aviv University.

“Yannic Kilcher’s deployment of GPT-4chan is a clear example of irresponsible practice. GPT-4chan is a language model that Kilcher trained on over three million 4chan threads from the Politically Incorrect /pol/ board, a community full of racist, sexist, xenophobic, and hateful speech that has been linked to white-supremacist violence such as the Buffalo shooting last month,” the petition said.

However, not everyone is on board with the petition. Dustin Tran, Senior Research Scientist at Google Brain, said, “I’m against GPT-4chan’s unrestricted deployment. However, a condemnation letter against a single independent researcher smells of unnecessary pitchfork behaviour. Surely there are more civil and actionable approaches.”

Two sides to a story

Much of the social media debate around GPT-4chan has centred on the havoc such models could wreak. The biggest concern was that GPT-4chan could pave the way for more AI bots designed to spread racist and hateful messages online without any human intervention.

Secondly, the model could target vulnerable people with harmful messages that might lead to self-harm. Models such as GPT-4chan could also be weaponised to spread misinformation. However, Kilcher has defended his model on social media, claiming there were no documented incidents of GPT-4chan causing harm to anyone.

https://twitter.com/ykilcher/status/1533917117002694657

That said, there are two sides to every story. A section of social media came to Kilcher’s defence, arguing the model is not inherently harmful and could even be used for good; for example, it could be leveraged to combat hate speech.

PS: The story was written using a keyboard.

Pritam Bordoloi

I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.