
New tech allows AI to detect toxicity in voice chat, but I think humans might be too smart for it

It's not that I don't have faith in the tech; my issue is with the humans it's designed to catch out

Toxicity in games is no fun, and in this year of our lord 2020, there seems to be a growing trend of using artificial intelligence to find and deal with toxic players. I don't just mean in text chat, either; the companies Modulate and FaceIt have both created AIs that can supposedly detect toxicity in voice chat from the way someone says something.

Part of me feels like this is a good idea. Having a way of quickly and easily getting rid of toxic players would be great. However, I've heard one too many stories about AI learning to be racist, so I do wonder if it's the best sort of tech to put in video games.

Last week, Modulate revealed a new AI-powered moderation tool called ToxMod. It uses machine learning models to understand both what players are saying and how they're saying it, to recognise if someone is being nasty in voice chat. The idea is that if someone says a swear, these AIs can tell whether it's a mean swear or a well-meaning swear (think "Fuck you!" vs "Fuck yeah!").
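Modulate haven't said exactly how ToxMod works under the hood, but to make the "mean swear vs well-meaning swear" idea concrete, here's a toy Python sketch of the general approach: combine the words in an utterance with some "tone of voice" features and train a classifier on both. Everything here - the training lines, the prosody numbers, the labels - is invented for illustration, not anything from Modulate.

```python
# Toy sketch (not Modulate's actual model): classify an utterance as
# hostile or friendly using its words plus made-up "tone" features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack
import numpy as np

# Tiny hand-made training set: transcripts plus fake prosody features
# (mean pitch in Hz, loudness in dB). A real system would extract these
# from the audio itself.
transcripts = ["fuck you idiot", "fuck yeah nice shot",
               "you are trash uninstall", "great play holy shit"]
prosody = np.array([[220.0, 78.0],   # shouted, high pitch
                    [180.0, 70.0],
                    [210.0, 76.0],
                    [170.0, 68.0]])
labels = [1, 0, 1, 0]  # 1 = hostile, 0 = friendly

vec = CountVectorizer()
X_text = vec.fit_transform(transcripts)
X = hstack([X_text, prosody])  # words + tone in one feature vector

clf = LogisticRegression().fit(X, labels)

# Score a new line: the same swear, but friendlier words and calmer delivery.
new = hstack([vec.transform(["fuck yeah we won"]), np.array([[175.0, 69.0]])])
print(clf.predict_proba(new))  # probabilities for [friendly, hostile]
```

The point of bolting the two feature sets together is that the same word can land on either side of the decision boundary depending on how it was delivered - which is exactly the "Fuck you!" vs "Fuck yeah!" distinction.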

Similarly, FaceIt recently announced that their anti-toxicity AI admin system, Minerva, can now police voice chat too. They run a platform for third-party tournaments and leagues in games including CS:GO, Rocket League and Dota 2, and claim Minerva's already detected more than 1.9 million toxic messages and banned over 100,000 players on it. The big news is that Minerva is now able to analyse full conversations among players and detect potential toxic voice chat, as well as annoying repetitive behaviours and sounds. It's impressive tech, for sure, but I can't help but wonder how well it would work were these sorts of AI more commonplace.

To preface this: I think my scepticism is mostly down to humans. With ToxMod's tech, if players know an AI is listening to their tone of voice, they could just say horrible things in a nice way. If we left everything to auto-moderation systems like this, plenty of smart yet dreadful people could still pass as polite players and never get caught out.


If an AI's machine learning is dynamic, it's learning on the job directly from humans, who can be, let's be honest, very manipulative. But if it's instead trained on data fed to it by humans, it can just as easily absorb their biases.

That's not to say I think all AI and machine learning is stupid or anything, but it is a bit like teaching an alien (or a toddler) how humans are supposed to act. There are a fair few examples of AI learning odd and straight-up bad behaviours. One of the most famous was Tay, the Microsoft chatbot which learned to be racist on Twitter (from people spamming it with racist stuff). A more serious case involved American software designed to perform risk assessments on prisoners, which falsely labelled black people as likely reoffenders at twice the rate it did white people (a bias it learned from the data it was given). Video games are, obviously, a lot less serious than that - but in a similar vein, I feel like there's a possibility some sort of game AI could teach itself (or indeed, be taught) that certain accents or dialects sound more mean-spirited than others. I'm not saying all these cool AIs are going to end up racist - but! Historically, we're not great at teaching them not to be.
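To see how easily that could happen, here's a deliberately contrived Python demo with entirely synthetic data: two groups behave identically, but the human-made "toxic" labels flag one group more often, and the trained model dutifully learns to penalise the group marker itself. None of this models any real moderation system; it just shows the mechanism.

```python
# Contrived demo: biased labels produce a biased classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
accent = rng.integers(0, 2, n)       # 0/1 marker for two accent groups
hostility = rng.normal(0.0, 1.0, n)  # "true" behaviour, identical across groups

# Biased labelling: group 1 gets flagged more often for the same behaviour.
labels = (hostility + 1.5 * accent + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([hostility, accent])
clf = LogisticRegression().fit(X, labels)

# The accent feature picks up a large positive weight, even though the
# underlying behaviour was the same for both groups.
print(dict(zip(["hostility", "accent"], clf.coef_[0])))
```

The model isn't malfunctioning here - it's faithfully reproducing the prejudice baked into its training labels, which is exactly the worry with any moderation AI trained on human judgements.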

I play a bunch of games where there are a lot of vocal goons eager to spout their nasty opinions. I don't bother with voice chat in the likes of Overwatch or Valorant. I've experienced all the dumb sexist comments, and they don't even particularly faze me anymore (the sorts of people who say that shit have, like, two jokes); I'm just bored of it. What I've learned from this, however, is that if people wanna be dickheads to you and can't do it with their voice, they will find a way. Griefing, team killing, throwing matches, staying AFK - some people are just too determined to make video games suck.

AI is cool tech, and if it works as intended, great! But it seems pretty advanced for an industry where basic reporting and blocking systems often don't work as well as they should in the first place. To me, players calling out other players seems to be the best way of dealing with toxic behaviour, but it needs to work hand-in-hand with automated systems to do away with the nasties for good. Building more robust reporting systems so we can do this even better would be my go-to. Unfortunately, I think that lots of humans are too smart and too mean to be caught out by artificial intelligence alone.

