Riot Games recently updated the terms and conditions for Valorant, its free-to-play hero shooter: the company will now record players’ live voice chats to combat disruptive behaviour.
“We know disruptive behaviour using our voice chat is a concern for a lot of players, and we’re committed to addressing it more effectively. In order for us to take action against players who use voice comms to harass others, use hate speech, or otherwise disrupt your experience, we need to know what those players are saying. Which is why, moving forward we’ll need the ability to analyse voice data,” said Riot Games.
According to an ADL report, 81 percent of US adults aged 18 to 45 encountered harassment while playing online games in 2020, up from 74 percent the year before. More than two-thirds of online multiplayer players have been subjected to severe forms of abuse, such as physical threats, stalking, and persistent harassment.
Bleep the bully
Recently, Intel announced Bleep – an AI tool to censor abusive or derogatory words in chats while playing games. The chip giant developed the AI tool in partnership with Spirit AI, a data science and AI engineering company.
Roger Chandler, VP & GM, Client XPUs Product and Solutions at Intel, said that, as one of the leaders in PC gaming, it is Intel’s responsibility to keep its online gaming space safe. Online toxicity, he said, is forcing gamers to quit and harming their mental health.
To that end, Intel used the feedback from online gamers to build Bleep.
Natural Language Understanding (NLU) is at the heart of this AI technology, allowing the creation of more sophisticated behavioural classifiers with multiple layers of contextual sensitivity. Spirit AI combined NLU with powerful search tools that scan millions of messages in milliseconds to optimise the tool’s efficiency. Bleep’s interface offers multiple sliders that let users screen hate speech under categories such as “Ableism and Body-Shaming,” “Racism and Xenophobia,” “LGBTQ+ Hate,” “Misogyny,” “Sexually Explicit Language,” and “Swearing.”
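To make the slider model concrete, here is a minimal sketch of per-category filtering. The category names come from the article; the slider levels, threshold values, and classifier scores are hypothetical, invented purely for illustration of how user-set sliders could map to censoring decisions.

```python
from dataclasses import dataclass, field

# Category names as listed in the article.
CATEGORIES = [
    "Ableism and Body-Shaming",
    "Racism and Xenophobia",
    "LGBTQ+ Hate",
    "Misogyny",
    "Sexually Explicit Language",
    "Swearing",
]

@dataclass
class BleepSettings:
    # Slider levels per category: 0 = allow all, 1 = some, 2 = most, 3 = all.
    sliders: dict = field(default_factory=lambda: {c: 0 for c in CATEGORIES})

def should_censor(scores: dict, settings: BleepSettings) -> bool:
    """Censor a phrase when any category's toxicity score (0.0-1.0)
    exceeds the cutoff implied by that category's slider position."""
    cutoffs = {0: 1.01, 1: 0.9, 2: 0.6, 3: 0.0}  # slider level -> score cutoff
    for category, score in scores.items():
        level = settings.sliders.get(category, 0)
        if score > cutoffs[level]:
            return True
    return False

settings = BleepSettings()
settings.sliders["Swearing"] = 2               # filter most swearing
scores = {"Swearing": 0.7, "Misogyny": 0.1}    # scores a classifier might emit
print(should_censor(scores, settings))         # True: 0.7 > 0.6 cutoff
```

Setting a slider to level 0 gives a cutoff above the maximum score, so that category is never filtered; level 3 filters anything with a nonzero score.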
The platform uses machine learning models and triaging technology to filter toxic speech from chat. Multiple algorithmic “gates” classify and discard non-toxic conversations. The algorithm looks at the speech’s content as well as characteristics such as emotion, frequency, and prosody. Depending on the classification, it offers the user the option of blocking, muting, or issuing a warning. The first few triaging gates run on the user’s device; the rest run on the platform’s secured server. If the need arises, a clip is escalated to ToxMod’s moderation team to respond manually while protecting the player’s privacy.
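The staged design described above can be sketched as a chain of gates, where cheap on-device checks clear most audio early and only clips flagged by every stage reach human moderators. All function names, signals, and thresholds below are assumptions for illustration, not the platform’s actual implementation.

```python
from typing import Callable, List

Gate = Callable[[dict], bool]  # returns True if the clip may be toxic

def keyword_gate(clip: dict) -> bool:
    # On-device, cheapest check: lexical match against a small word list.
    return any(w in clip["transcript"] for w in ("idiot", "trash"))

def prosody_gate(clip: dict) -> bool:
    # On-device: loudness and emotion as a proxy for shouting or anger.
    return clip["volume_db"] > 80 or clip["emotion"] == "anger"

def server_model_gate(clip: dict) -> bool:
    # Server-side: stand-in for a heavier ML classifier's score.
    return clip.get("model_score", 0.0) > 0.8

def triage(clip: dict, gates: List[Gate]) -> str:
    """Run gates in order; a clip cleared by any gate is dropped early,
    so non-toxic audio never leaves the device or reaches moderators."""
    for gate in gates:
        if not gate(clip):
            return "clear"             # classified non-toxic, stop here
    return "escalate_to_moderation"    # every gate flagged it: human review

clip = {"transcript": "you are trash", "volume_db": 85,
        "emotion": "anger", "model_score": 0.9}
print(triage(clip, [keyword_gate, prosody_gate, server_model_gate]))
# -> escalate_to_moderation
```

Ordering the gates from cheapest to most expensive keeps latency and server load low, since the vast majority of clips exit at the first stage.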
Technology alone cannot entirely solve this menace. Gaming companies and professionals worldwide have formed a group, the Fair Play Alliance, to address the rise of online toxicity in the gaming world. A few tech giants are working towards the same goal. For example, Facebook has been working on XLM-R, a model combining XLM and RoBERTa that uses self-supervised training to achieve cutting-edge performance in text comprehension across multiple languages. This will eventually help analyse and identify potential hate speech.
Online gaming has exploded in recent years. China had to set a gaming curfew over health concerns, and the pandemic pushed many more people towards video games. Esports and streaming platforms like Twitch saw exponential growth during the period. With this mass migration to the online world, it is essential to keep the space safe and civil. Gaming companies should take the initiative to encourage best practices and maintain the hygiene of the online gaming space. Introducing a reputation scoring system based on online behaviour would be an excellent place to start. Companies should also commit resources to developing machine learning algorithms that moderate content and curb hate speech on such platforms.