AI Dungeon is a single-player and multiplayer text adventure game that uses artificial intelligence to generate unlimited content. The game uses GPT-3 to enhance its text-based gameplay. Unlike other video games, which require complex decision trees to script a large number of paths through the game, GPT-3 lets AI Dungeon dynamically generate an evolving game state in response to the players' typed prompts.
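The loop described above can be sketched in a few lines. This is a hypothetical illustration, not Latitude's actual code: `generate_continuation` stands in for the call the real game makes to GPT-3, and all function names are invented for this sketch.

```python
# Minimal sketch of a prompt-driven adventure loop (hypothetical names;
# the real game sends the accumulated story to GPT-3, stubbed out here).
def generate_continuation(story_so_far: str, player_action: str) -> str:
    """Stand-in for the language model: the real system would send the
    story context plus the player's typed action to GPT-3 and return
    the generated next passage."""
    return f"You {player_action.lower().rstrip('.')}. The story continues..."

def play_turn(story: list[str], player_action: str) -> list[str]:
    """Append the player's action and the model's continuation, so each
    new turn is conditioned on everything generated before it."""
    continuation = generate_continuation("\n".join(story), player_action)
    return story + [f"> {player_action}", continuation]

story = ["You stand at the mouth of a dark cave."]
story = play_turn(story, "Light a torch and enter")
```

The key point is that no path through the game is pre-scripted: each turn's output is generated from the full story so far plus whatever the player chose to type.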
It began as a fun experiment in which sophisticated text-writing algorithms were used to create Dungeons & Dragons-style role-play adventures. Now, the experiment seems to have taken a dark turn. According to a Wired report, users have started tapping GPT-3 to develop abusive, toxic, and dangerous stories. In more extreme cases, some of the game storylines involved the sexual abuse of minors.
The Utah-based startup Latitude launched AI Dungeon in 2019. The game demonstrates a unique form of human-machine collaboration: GPT-3 allows players to craft a personalised and unpredictable adventure simply by typing out the actions or dialogue they want their characters to perform.
AI Dungeon provided unfettered access to GPT-3 technology. In December 2019, when the game launched using GPT-2, the text-writing algorithm's earlier open-source version, it quickly racked up 100,000 players. Last year, OpenAI gave Latitude early access to the more powerful commercial version, GPT-3, and upheld AI Dungeon as an example of the commercial and creative potential of its algorithms.
However, players soon started exploiting GPT-3 to develop sexually explicit and abusive 'adventure' games. Some players complained that the algorithm itself would bring up sexual themes, making them 'deeply uncomfortable'.
Latitude's team was quick to acknowledge the problem. While it supports the freedom and creativity offered by AI-powered games that allow users to create imaginative and unique experiences, it stands against abusive content, the company said in a blog post. Latitude reiterated that it has 'zero tolerance for sexual content involving minors'.
The company has released a test system to prevent sexual content that flouts company policy. While it will continue to support not-safe-for-work (NSFW) content involving consensual adult themes, violence, and profanity, it will prevent the use of AI Dungeon to create child sexual abuse material. “This means content that is sexual or suggestive involving minors; child sexual abuse imagery; fantasy content (like “loli”) that depicts, encourages, or promotes the sexualisation of minors or those who appear to be minors; or child sexual exploitation,” the blog noted.
The company has announced three measures:
- Improving the AI-based feedback system.
- Enabling users to report false positives to limit the impact of the restrictions on other types of content.
- Informing moderators beforehand when platform changes are implemented.
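Latitude has not published how its filter works, but the second measure above, letting users report false positives so legitimate content is not over-blocked, can be sketched abstractly. Everything here is hypothetical: the term list, the report mechanism, and all names are invented for illustration.

```python
# Hypothetical sketch of a term-based content filter with a
# false-positive report channel. Latitude's real system is AI-based
# and unpublished; this only illustrates the reporting idea.
BLOCKED_TERMS = {"forbidden_term_a", "forbidden_term_b"}  # placeholder list
cleared_terms: set[str] = set()  # terms reviewed and cleared after reports

def is_blocked(text: str) -> bool:
    """Flag text containing a blocked term, unless moderators have
    cleared that term via the false-positive report channel."""
    words = set(text.lower().split())
    return bool((words & BLOCKED_TERMS) - cleared_terms)

def report_false_positive(term: str) -> None:
    """Measure 2: a user report that, once reviewed, stops the term
    from triggering the filter for other players."""
    cleared_terms.add(term)
```

The trade-off such a design makes is precision versus recall: a strict filter blocks harmless stories, so a feedback loop from users back to moderators is what keeps the restrictions from spilling over onto other types of content.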
That said, a few players reported that the new test system contained a security flaw that made every generated story publicly accessible.
GPT-3 was the breakthrough innovation of 2020. With 175 billion parameters, it was the largest language model ever developed at the time. In contrast, its predecessor, GPT-2, had just 1.5 billion parameters.
GPT-3's researchers cautioned that it had the potential to 'advance both the beneficial and harmful applications of language models'. They said misuse could take the form of spam and phishing attacks, fraud, abuse of legal process, and social-engineering pretexting.
The team also noted that since the model derives its text-generation capabilities mainly from internet text, it carries an inherent risk of bias.