The new text filters have different levels of security that can be applied.
I was in the middle of an interview last week with Dave McCarthy, Corporate Vice President of Xbox Operations, when I asked if I could show him an image from my Instagram account.
The image — also above — is pretty simple. It’s a hastily captured screenshot of a message I’d received over Xbox Live from an account named “Idid911nigrssss”. The message itself? “Ur a f**g”.
I can only assume the user meant to call me a flag.
Hidden away in a secondary inbox in my messages area, the message first warned me it likely contained offensive content — specifically saying, “Potentially offensive message hidden” — and asked if I actually wanted to read it or not. As an active gay-identifying gamer of many, many years, I was used to messages like this and clicked through to read out of interest. I asked if this new message functionality was using new safety technologies I assumed Dave wanted to tell me about.
“Wow, you put that on a tee for me to knock off,” McCarthy responded with a laugh.
“Yeah. We are starting to do proactive text filtering… in addition to the 24-7 ’round the globe moderation that we do from a sort of a reactive perspective. We want to augment that more and more. We want to augment, quite frankly, human intelligence with artificial intelligence and machine learning that allows us to proactively filter some stuff out before it even hits people. So we’ve been doing that for a while in certain places on Xbox Live already.
“This is us for the first time going to text chat and applying a model there that will allow you to decide as a user what level of content filtering you want on messages that hit your inbox. We’ve created a primary and a secondary inbox; primary is for your friends and people you’re actively engaged in chats with, and secondary is for the random people that kind of reach out to you. You’ll be able to customise between four different levels of — let’s call it cleanliness — on the messages and decide what you want to see. It goes anywhere from, you know, totally clean and friendly to entirely unfiltered. You can decide based on this scenario what filter you want to have in place.”
So in my own example, my controls kept offensive chats from reaching me directly and gave me the option to then go and read them if I so chose. If I eventually get sick of being called a fag — because, honestly, those are the kinds of messages I get — I can change my settings and keep those messages from ever being revealed.
“You’ll actually be able to even control the setting for whether you want that warning to be clickable or not, because maybe you don’t want the temptation to know what’s underneath it and maybe you’re like, ‘Hey, it’s better that I don’t know,'” McCarthy explained. “And then you can actually block that ability for yourself. It is blocked for a child or a teen, by the way. So only an adult could change that setting for them to actually be able to go see what the messages [are].”
One potential downside of filtering messages at that level is that it partly shields the offending user. Sending a message calling someone a fag won’t automatically report the offender, though a recipient can still report as normal. In my example, I could report the user both for his gamertag and for his message to me.
“All of our current reporting capabilities are in place,” McCarthy told me. “So if you see anything as a user, you instantly have that sort of one-button ability to report an issue. And when you do, we’ll still go investigate. So the fact that these filters [are] running doesn’t stop that — we still invest heavily in that side of things.”
I mentioned to McCarthy that having to read messages in order to then report a user wasn’t an ideal situation, and he agreed.
“Right, right,” he said. “First of all, obviously, getting feedback and reports from our community is hugely valuable. So we want to try and make that [as] easy as possible for people. But we also understand from talking to our gamers that there is a burden over time and people start to [develop] coping mechanisms. Maybe they don’t turn on their voice on Xbox Live because they worry that somebody will detect or infer they’re from a certain diverse community that is going to get harassed. That is just not acceptable. That’s not the gaming environment we want to have — it’s not in our ‘gaming for everyone’ spirit.
“So we can augment the effort of individuals who report, who go out there and do the right thing with 250,000 strong people and then put this additional technology in place to avoid those harms being inflicted in the first place for people. I do think collectively that’s gonna lead to a healthier place and we’re all part of this solution. I’m really happy that we’re adding this capability to the stack of things we can use to make it the environment we all want it to be.”
McCarthy then explained that the systems are ever evolving and are meant for bigger and better things than we’ll see today.
“We [already] do scanning of custom images for things like gamerpics that people try and upload that goes through a service called PhotoDNA, which is actually a hashed database system where we can track for terrorist content,” he said. “[And] what is really cool for us [with text filtering] is that it is customizable at a personal level for you as an individual. We run these language learning models in the background that [constantly] update across, like, 21 languages. This is the start of a broad rollout across Xbox Live. It won’t just go to your text messages in your inbox. We’ll actually slowly roll it out to other areas of the service as well. Ultimately our ambition is to apply similar sets of technology to other types of content filtering for communication as well.”
You can check out an overview of the new message safety settings below, or read more on Xbox Wire. On console, you can adjust your text filtering levels by going to Settings > General > Online safety & family > Message safety.