How to Keep Child Predators Out of Virtual Playgrounds Like ‘Fortnite’ and ‘Minecraft’

There’s no perfect fix, but experts on moderation, sexual exploitation, and internet law tell OneZero there is hope

Online games that are wildly popular with kids, like Fortnite, Minecraft, Clash of Clans, and Roblox, have become hunting grounds for pedophiles. Recent reporting suggests that kids, at least thousands of them and perhaps far more than that, are being groomed, cajoled, tricked, intimidated, or blackmailed into sending sexual images of themselves or others to predators who trawl gaming platforms and chat apps such as Discord for victims.

While there is often an element of moral panic at play when a new trend or technology poorly understood by adults is identified as a threat to children, an investigation by the New York Times made clear that the problem is real, widespread, and growing. If you’re still skeptical, a firsthand account of what it’s like to hunt predators from an employee at Bark, a startup that develops parental controls for online apps, illustrates just how pervasive the problem is on social media and messaging platforms. It’s impossible for a parent to read these stories without coming away alarmed about kids’ safety and well-being in the world of online gaming.

What the reporting so far has not made fully clear, however, is what can actually be done about the threat. Most of the companies mentioned in the Times story — including Roblox, Discord, Sony (maker of PlayStation), and Microsoft (maker of Xbox and Minecraft, among others) — pointed to at least some measures they have put in place to protect underage users, but few could demonstrate meaningful success. Nearly every approach discussed had obvious shortcomings. And Epic Games, maker of Fortnite, didn’t even respond to requests for comment from either the Times or OneZero. Supercell, maker of Clash of Clans, did not respond to OneZero, either.

Experts on content moderation, online sexual exploitation, and internet law told OneZero that there is hope for addressing the problem in meaningful ways. It just won’t be easy — and some argue it won’t happen without changes to the bedrock law that shields online platforms from many forms of liability.

Part of what makes policing gaming so tricky is that the interaction between predators and kids rarely stays on the gaming platforms. Often, the predators find and form a relationship with kids on a gaming site, only to move to more private chat apps such as Facebook Messenger or Kik to trade sexual images and blackmail them. Sarah T. Roberts, assistant professor of information studies at UCLA, compared unmoderated gaming chats to “a virtual playground with creeps hanging out all around it” — and no parents are present.

You can imagine multiple approaches to guarding such a venue. One would be to bring in responsible adults to watch over it — that is, moderation. Another would be to install surveillance cameras — providing oversight via automation. A third approach would involve checking the identities or ages of everyone who enters the playground — a category I’ll call verification. A fourth would be to make sure all the parents and kids in the community are aware of the playground’s risks, and to help them navigate it more safely — i.e., education. Finally, society could introduce laws restricting such playgrounds, or holding their creators liable for what happens on them: regulation.

If there were only one virtual playground, there might be a single correct answer. But there are many, and their features are distinct, making it impossible to craft a single effective policy. “I don’t see a grand technical solution,” said Kat Lo, an expert on online moderation and project lead for the Content Moderation Project at the nonprofit Meedan, which builds open-source tools for journalists and nonprofits.

That doesn’t mean the situation is hopeless. “I’m feeling more optimistic about what feels like kind of a piecemeal approach,” Lo added. The fixes that she, Roberts, and other experts suggested can be roughly divided into the five categories I outlined above.

1. Moderation

Perhaps the most obvious way to police a digital space is to bring in human oversight, whether by deputizing users as moderators (as Reddit does) or hiring contractors or employees to do the job. But putting an employee in every chat is costly for tech companies whose businesses are built on software and scale — and to users who want to talk smack or coordinate their gameplay without a hall monitor looking over their shoulders.

Moderation also doesn’t make much sense in the context of platforms specifically built for private messaging. Many massively multiplayer online games, such as Blizzard’s World of Warcraft, offer players the ability to privately message any other player at any time via a feature called “whisper.” Epic Games’ Fortnite lets you privately message players on your friend list, and offers small-group text and voice chats. Roberts suggested that such platforms “move to a model of less private messaging and more group-oriented messaging,” with monitoring by community managers. While access to private spaces is important to children’s development, she said, there’s no reason gaming platforms need to be among these spaces.

Of course, moderation is far from a perfect solution. Just ask Facebook, whose contracted moderators endure difficult work conditions and struggle to consistently apply its rules. There’s also the risk that limiting private messaging on a gaming platform such as Minecraft simply pushes users onto an even more private platform for chat, such as Discord. But given that a perfect solution doesn’t exist, more moderation in gaming chats would be a good start — assuming you can get the platforms to do it. (We’ll get to that challenge further down.)

There’s also the option of simply limiting or removing chat features. Hearthstone, a Blizzard game released in 2014, allows players to communicate with matched opponents only via a menu of preset messages. The company told the gaming site Polygon at the time that the goal was “to keep Hearthstone feeling fun, safe and appealing to everyone.”
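
To make that idea concrete, here is a rough sketch of what a preset-only chat system can look like. The class, message list, and function names below are invented for illustration; this is not Blizzard's actual implementation, just the general pattern of allowing only menu choices through and never delivering free-form text.

```python
# Hypothetical sketch of a preset-only chat system, in the spirit of
# Hearthstone's emote menu. Messages and names are illustrative only.
from enum import Enum

class PresetMessage(Enum):
    GREETINGS = "Greetings."
    WELL_PLAYED = "Well played."
    THANKS = "Thank you."
    OOPS = "Oops."

def send_chat(sender_id: str, message_id: str) -> str:
    """Deliver only messages chosen from the preset menu.

    Free-form text never reaches the other player, so there is
    nothing for a predator to type.
    """
    try:
        message = PresetMessage[message_id]
    except KeyError:
        raise ValueError(f"{message_id!r} is not a preset message")
    return f"{sender_id}: {message.value}"

print(send_chat("player42", "WELL_PLAYED"))  # player42: Well played.
```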

2. Automation

Fortnite has some 250 million registered players, of whom more than a million may be online at any given time. Anytime scale is part of the problem — in this case because there are far too many people playing massively multiplayer online games at once for humans to watch over everything they say to each other — it’s worth at least considering whether automation could be part of the solution. And as it turns out, automation already is part of the solution, in some settings. It’s just overmatched.

In 2009, Microsoft released a free tool called PhotoDNA that scans still images for matches against known examples of child pornography, and it has since been widely adopted by other companies. Last year, Microsoft expanded the tool to include video. And last week, the company announced that it is releasing new technology called Project Artemis that uses machine learning to scan online conversations for indicators of child grooming and flag suspicious ones for human review. Microsoft developed the technology in collaboration with fellow tech companies, including Roblox and Kik, as well as the nonprofit Thorn, and will make it available for free to other platforms.
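
PhotoDNA's perceptual-hashing algorithm is proprietary, but the basic pattern of hash-matching uploads against a shared list of known images can be sketched in a few lines. The sketch below is a simplification: it uses an exact SHA-256 match, whereas a real perceptual hash is designed to survive resizing and recompression, and the hash entry here is a placeholder rather than anything from a real database.

```python
# Simplified illustration of matching uploads against a list of known
# illegal images. PhotoDNA itself uses a proprietary perceptual hash
# that tolerates edits; this sketch only catches byte-identical copies.
import hashlib

# In practice this set would be populated from NCMEC or industry
# hash-sharing programs, not hard-coded. The entry is a placeholder.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_block(image_bytes: bytes) -> bool:
    """Flag an upload whose hash matches a known-bad image."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```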

Roblox, for its part, told OneZero it already applies filtering tools to all chats on its platform, with extra restrictions for children under 13. The filters block crude language, but also attempt to detect and block when a player is trying to lure another player off the game and into private communication, such as by asking for their phone number. Project Artemis will add another layer to Roblox’s systems, a spokesperson said.
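
Roblox has not published how its filters work, but a rule-based check of the kind the company describes, one that blocks obvious attempts to pull a conversation off-platform, might look roughly like the sketch below. The patterns and the stricter under-13 rule are invented for illustration.

```python
# Hypothetical sketch of a chat filter that blocks attempts to move a
# conversation off-platform, such as asking for a phone number.
# Patterns and rules are invented; Roblox's real filters are not public.
import re

OFF_PLATFORM_PATTERNS = [
    re.compile(r"\b(phone|cell)\s*number\b", re.IGNORECASE),
    re.compile(r"\b(text|dm|snap|whatsapp)\s+me\b", re.IGNORECASE),
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone number
]

def filter_chat(message: str, recipient_under_13: bool = False) -> str:
    """Replace risky messages with hash marks instead of delivering them."""
    risky = any(p.search(message) for p in OFF_PLATFORM_PATTERNS)
    # Illustrative "extra restriction" for younger players: block any
    # message that contains digits at all.
    if recipient_under_13 and re.search(r"\d", message):
        risky = True
    return "#" * len(message) if risky else message

print(filter_chat("what's your phone number", recipient_under_13=True))
```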

Meanwhile, Facebook has been building its own machine-learning software, including a tool that tries to identify examples of predators grooming children — that is, befriending them for purposes of soliciting sexual images later. Companies that use this sort of software generally partner with the National Center for Missing and Exploited Children, or NCMEC, to report material to law enforcement. But as the Times reported in a separate investigation of child pornography on platforms such as Facebook Messenger, NCMEC itself has been overwhelmed in recent years by the volume of reports.

Image-scanning tools such as PhotoDNA are less applicable on gaming platforms, because sexual images are typically exchanged on more private messaging services. An approach like Project Artemis that analyzes chats for suspicious patterns of speech could hold promise for gaming platforms, Lo said — but it depends on the context. In some online settings, that sort of monitoring might be viewed by users and perhaps regulators as an invasion of privacy. If the system isn’t sufficiently sophisticated and constantly updated, predators will learn how it works and use coded speech to get around it. And it could be skirted altogether on a platform such as Discord, which allows users to chat by voice as well as text.

Attempts to log and analyze voice communications in some settings may be constrained by privacy laws, noted Jeff Kosseff, assistant professor of cybersecurity law at the U.S. Naval Academy and the author of a book about Section 230 of the Communications Decency Act, a foundational piece of internet law. Additionally, if tech companies work too closely with law enforcement in monitoring their users, Kosseff said, those efforts could run afoul of Fourth Amendment restrictions on warrantless searches by the government. Gaming companies looking to do this sort of monitoring have to do so independently, and properly notify users to obtain their consent, such as through their terms of service.

Implementing this sort of A.I. can require resources that smaller game studios lack. That’s where industry cooperation and standards could help, Lo said. But such systems must constantly evolve, or predators will quickly learn what they’re looking for and deploy evasive strategies, such as coded language, to avoid detection. Even the most sophisticated systems probably can’t match the effectiveness of trained human moderators, Lo added. There’s still value in automated detection as a first layer — like flagging suspicious interactions for human follow-up — but only if it functions as part of a more comprehensive approach.
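
As a rough illustration of that "first layer" idea, the sketch below shows how a risk score from some upstream classifier (assumed here, not shown) might route a conversation to a routine human review queue or an urgent escalation list. The thresholds and names are invented, not drawn from any particular platform.

```python
# Minimal sketch of automation as a first layer: an assumed risk score
# decides whether a conversation is ignored, queued for a human
# moderator, or escalated for urgent review. Thresholds are invented.
from dataclasses import dataclass, field
from typing import List

REVIEW_THRESHOLD = 0.6
ESCALATE_THRESHOLD = 0.9

@dataclass
class ReviewQueue:
    pending: List[str] = field(default_factory=list)
    escalated: List[str] = field(default_factory=list)

    def triage(self, conversation_id: str, risk_score: float) -> str:
        if risk_score >= ESCALATE_THRESHOLD:
            self.escalated.append(conversation_id)  # urgent human review
            return "escalated"
        if risk_score >= REVIEW_THRESHOLD:
            self.pending.append(conversation_id)    # routine human review
            return "queued"
        return "ignored"

queue = ReviewQueue()
print(queue.triage("conv-123", 0.72))  # queued
print(queue.triage("conv-456", 0.95))  # escalated
```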

All Rights Reserved for Will Oremus
