
Ubisoft and Riot are teaming up to tackle toxic online chat with AI, but big questions remain over how it's going to work

"We know this problem cannot be solved in a vacuum," Riot tell us

Do you ever feel like multiplayer games would be better if other players were less abusive? Ubisoft and Riot Games are looking into training artificial intelligence to tackle bad behaviours within in-game chat, a research collaboration they’re calling Zero Harm In Comms. Ahead of their announcement today, I put some questions to Ubisoft La Forge’s executive director Yves Jacquier and Riot’s head of technology research Wesley Kerr to get some more insight on the joint project, and ask them exactly how their proposal will work.

Rainbow Six Siege is one of Ubisoft's core multiplayer games.

After reading that, you're probably wondering "These companies are tackling toxicity? Really?" Ubisoft and Riot have their own histories of alleged inappropriate behaviour within their company cultures. Although both companies have said they're committed to behavioural change, it could prove tough to win over players who are aware of their history. Though still in its early stages, Zero Harm In Comms is an attempt at co-operating on a thorny issue that's relevant across the industry, but it's just one possible response to the problem of disruptive behaviour in chat.

Ubisoft and Riot are already both members of the Fair Play Alliance, with a shared commitment to creating fair, safe, and inclusive spaces amid the wilderness of online gaming, and Zero Harm In Comms is how they're choosing to tackle the issue of toxicity in chat. The companies didn't specify whether their research will cover text or voice chat, or both, but they say they're aiming to "guarantee the ethics and privacy" of the initiative.

Ubisoft and Riot are hoping their findings can be used to create a shared database for the whole games industry to gather data from, and use that to train AI moderation tools to pre-emptively detect and respond to dodgy behaviour. To train the AI that's central to the Zero Harm In Comms project, Ubisoft and Riot are drawing on chat logs from their respective diverse and online-focused games. This means their database should have broad coverage of the types of players and behaviours it's possible to encounter when fragging and yeeting online. AI training isn't infallible, of course; we all remember Tay, Microsoft's AI chatbot, which Twitter turned into a bigot within a day, though that's admittedly an extreme example.

The Zero Harm In Comms project began last July. "This is a complex topic and one that is very difficult to solve, not to mention alone," Jacquier tells me. "We're convinced that, by coming together as an industry, through collective action and knowledge sharing, we can work more efficiently to foster positive online experiences." Jacquier initially approached Kerr on behalf of Ubisoft because the two had worked together before on growing Riot's investment in tech research. Jacquier and Kerr established two objectives for the research. The first is to create a GDPR-compliant data-sharing framework that protects privacy and confidentiality. The second is to use the data gathered to train cutting-edge algorithms to more successfully pick up "toxic content".

Riot Games' Head Of Technology Research Wesley Kerr (left) and Ubisoft La Forge’s Executive Director Yves Jacquier (right)

Riot feel working with Ubisoft broadens what they can hope to achieve through the research, Kerr tells me. "Ubisoft has a large collection of players that differ from the Riot player base," he says, "so being able to pull these different data sets would potentially allow us to detect the really hard and edge cases of disruptive behaviour and build more robust models." Ubisoft and Riot haven't approached any other companies to join in so far, but might in the future. "R&D is difficult, and for two competitors to share data and expertise on an R&D project you need a lot of trust and a manageable space to be able to iterate," Jacquier says.

I asked Jacquier and Kerr to define what they'd consider to be disruptive behaviour in chat. Jacquier tells me that context is key. "Most commercial services and tools have strong limitations: many are based on dictionaries of profanities that can easily be bypassed," he says, "and that do not take into account the context of a line. For example, in a competitive shooter, if a player says 'I'm coming to take you out' it might be part of the fantasy and therefore acceptable, while it might be classified as a threat in another game." The researchers will be trying to train up AI to glean that context from chat, but acknowledge that they've set themselves an incredibly complex task. Kerr points out that behaviours can vary across "cultures, regions, languages, genres, and communities".
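To see why dictionary matching falls short, here's a toy sketch in Python. The blocklist and messages are entirely hypothetical, purely to illustrate the bypass and context problems Jacquier describes; this is not either company's actual tooling:

```python
# Hypothetical dictionary-based filter of the kind Jacquier criticises.
PROFANITY_LIST = {"idiot", "trash"}  # illustrative blocklist

def dictionary_flag(message: str) -> bool:
    """Flag a message if any token matches the blocklist exactly."""
    return any(token.strip(".,!?") in PROFANITY_LIST
               for token in message.lower().split())

print(dictionary_flag("you absolute idiot"))          # True: exact match caught
print(dictionary_flag("you absolute id1ot"))          # False: trivial leetspeak bypass
print(dictionary_flag("I'm coming to take you out"))  # False here, but the filter
# also has no way to tell in-game banter from a genuine threat
```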

As stated, the project revolves around AI, and improving its ability to interpret human language. "Traditional methods offer full precision but are not scalable," Jacquier tells me. "AI is way more scalable, but at the expense of precision." Kerr adds that, in the past, teams have based their efforts on using AI to target specific keywords, but that's always going to miss some disruptive behaviour. "With the advancements in natural language processing and specifically some of the more recent large language models," he says, "we are seeing them be able to understand more context and nuance in the language used rather than simply looking for keywords."
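For a flavour of what that keyword-free approach looks like in practice, here's a minimal sketch using a publicly available toxicity model (unitary/toxic-bert, via Hugging Face's transformers library) as a stand-in; neither Ubisoft nor Riot has said which models Zero Harm In Comms will actually use:

```python
# Illustration only: scoring chat lines with an off-the-shelf toxicity model,
# not the model the Zero Harm In Comms researchers are building.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

for line in ["gg everyone, well played",
             "uninstall the game, you're worthless"]:
    result = classifier(line)[0]  # top label plus the model's confidence
    print(f"{line!r} -> {result['label']} ({result['score']:.2f})")
```

Unlike a blocklist, a model like this scores whole sentences, so it can flag abuse that contains no banned word at all; the contextual judgement Jacquier describes (threat versus in-game fantasy) is the much harder step beyond this.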

Project U is an upcoming session-based co-op shooter from Ubisoft.

Jacquier assures me that privacy is a core tenet of the research. "These data are first scrubbed clean of any Personally Identifiable Information and personal information and then labelled by behaviour," he says, "for instance: totally neutral, racism, sexism, etc." The data is then fed to the AI to train it to recognise potentially disruptive behaviour when it appears. These AI are Natural Language Processing (NLP) algorithms, which Jacquier tells me can detect 80% of harmful content, compared to a 20% success rate for dictionary-based techniques.
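As a rough sketch of that scrub-then-label step: the regexes, placeholder tokens, and label set below are my own illustrative guesses (the real pipeline isn't public), but they show the shape of the process Jacquier outlines:

```python
# Hypothetical PII scrubbing and labelling, loosely following Jacquier's
# description: strip identifying details, then attach a behaviour label.
import re

PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),  # email addresses
    (re.compile(r"\b(?:\d[ -]?){7,14}\d\b"), "<PHONE>"),      # phone-like numbers
    (re.compile(r"@\w+"), "<HANDLE>"),                        # player handles
]

# Example label set drawn from the categories Jacquier mentions
LABELS = {"neutral", "racism", "sexism"}

def scrub(message: str) -> str:
    """Replace personally identifiable substrings with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        message = pattern.sub(placeholder, message)
    return message

record = {"text": scrub("report @xXslayerXx, email me at foo@bar.com"),
          "label": "neutral"}  # label assigned by a human annotator
print(record)  # {'text': 'report <HANDLE>, email me at <EMAIL>', 'label': 'neutral'}
```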

Kerr breaks down the process of gathering and labelling data to train these NLP algorithms a bit more for me. "The data consists of player chat logs, additional game data, as well as labels indicating what type of disruptive behaviour is present if any," he says. "Many of the labels are manually annotated internally and we leverage semi-supervised methods to add labels to examples where our models are quite confident that disruptive behaviour has occurred." To pick up disruptive behaviour as successfully as possible, the NLP algorithm training will involve "hundreds or thousands of examples", learning to spot patterns among them.
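That semi-supervised step can be sketched simply: train on the hand-labelled lines, then let the model pseudo-label unlabelled chat only when it's highly confident, sending everything else back to human annotators. The threshold, model choice, and example lines below are assumptions for illustration, not Riot's or Ubisoft's setup:

```python
# Simplified pseudo-labelling loop in the spirit of Kerr's description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_text = ["gg well played", "go die you idiot",
                 "nice shot", "trash player uninstall"]
labels = [0, 1, 0, 1]  # 0 = neutral, 1 = disruptive (hand annotated)
unlabelled_text = ["gg all", "you are garbage, idiot"]

vec = TfidfVectorizer()
model = LogisticRegression().fit(vec.fit_transform(labelled_text), labels)

CONFIDENCE = 0.9  # hypothetical cut-off for trusting a machine label
for text, probs in zip(unlabelled_text,
                       model.predict_proba(vec.transform(unlabelled_text))):
    if probs.max() >= CONFIDENCE:
        print(f"auto-label {text!r} as {probs.argmax()}")  # joins the training set
    else:
        print(f"send {text!r} to a human annotator")       # model too uncertain
```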

Of course, another elephant in the room here is the player. Any time we go online, we open ourselves up to the risk of bad interactions with other people, anonymous or otherwise. I asked Jacquier and Kerr how they thought players would react to AI judging their in-game convos. Jacquier acknowledged that it's just a first step towards tackling toxic spaces in the industry. "Our hope is that our players will gradually notice a meaningful positive shift in online gaming communities where they see less disruptive behaviour," he said. Kerr added that he hoped players can understand that it takes time for projects such as Zero Harm In Comms to change behaviour in a meaningful way. Maybe players could just try being nice to each other, as former Overwatch director Jeff Kaplan once suggested?

Online games such as League Of Legends are Riot Games' bread and butter.

Although neither Jacquier nor Kerr discussed what will actually happen to players once their AI-based tools have detected disruptive behaviour, the eventual results of the Zero Harm project “won’t be something that players see overnight”. The research is only in the early data-gathering phase, and a way off from entering its second phase of actually using that data to better detect disruptive behaviour. “We’ll ship it to players as soon as we can,” Kerr tells me. Zero Harm In Comms is still in its infancy, but both Ubisoft and Riot hope the research will eventually have far-reaching, and positive, results to share with the entire games industry and beyond. “We know this problem cannot be solved in a vacuum,” Kerr says, and Jacquier agrees: "It’s 2022, everyone is online and everyone should feel safe."

That said, it's not yet certain whether the research project will even have anything meaningful to report, Jacquier points out. "It is too soon to decide how we will share the results because it depends on the outcomes of this first phase," he says. "Will we have a successful framework to enable cross-industry data sharing? Will we have a working prototype?" Regardless of how the project turns out, the companies say they'll be sharing their findings next year.
