Time for OpenAI to Open Source Toxicity Detection?

Open-sourcing toxicity filtering models would bring huge social benefits and help all open-source LLMs become less toxic
If you have followed the AI space closely over the past few years, you will have encountered many instances of AI hurling violent, sexist and racist barbs at people. This happens because most of the data used to train these models is scraped from the web, which often contains toxic content.

Take, for instance, GPT-4chan, where YouTuber Yannic Kilcher created an AI chatbot and trained it on three years' worth of posts from 4chan, the repulsive cousin of Reddit. Kilcher fed the bot threads from /pol/, the "politically incorrect" message board notorious for racist, xenophobic and hateful content. Unsurprisingly, the chatbot spewed unparliamentary content as a result. Hence, it becomes crucial to remove toxicity from AI models.

OpenAI, the company behind ChatGPT, has mana
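To make the idea concrete, here is a minimal sketch of the kind of corpus-level filtering an open-source toxicity model would enable before training. The scorer below is a toy keyword heuristic standing in for a real classifier; `TOXIC_TERMS`, `toxicity_score` and `filter_corpus` are illustrative names, not any actual library's API.

```python
# Toy sketch: filtering toxic documents out of a training corpus.
# A real pipeline would replace toxicity_score with an open-sourced
# moderation model; this blocklist heuristic is a placeholder only.

TOXIC_TERMS = {"slur1", "slur2"}  # placeholder blocklist

def toxicity_score(text: str) -> float:
    """Fraction of tokens that match the blocklist (toy stand-in)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in TOXIC_TERMS)
    return hits / len(tokens)

def filter_corpus(docs, threshold: float = 0.1):
    """Keep only documents scoring below the toxicity threshold."""
    return [d for d in docs if toxicity_score(d) < threshold]
```

The design point is that the filter is a drop-in function over raw text, so swapping the heuristic for a genuinely open model would not change the surrounding data pipeline.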
Pritam Bordoloi
I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.