Blame the Messenger

Child exploitation is rampant online, and Facebook’s chat platform is a big part of the problem

Facebook is many things: a social network; a conglomeration of multimedia publishing apps; and now, the world has learned, perhaps the largest mainstream distributor of videos and pictures in which children are sexually abused.

The New York Times on Saturday published a feature detailing an “explosion” in the online distribution of this kind of content, noting that Facebook’s Messenger app in particular was last year “responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material.” This report, authored by journalists Michael H. Keller and Gabriel J.X. Dance, mentions encryption 13 times, often in reference to Messenger. Facebook said earlier this year it plans to roll out end-to-end encryption on this platform by default, which will make it much harder to detect and stop the proliferation of problematic material of any kind, including footage and images of children being sexually exploited.

For all the references to encryption, the Times story doesn’t go into much detail as to why it actually matters. Essentially, end-to-end encryption makes it impossible for an intervening force, like a government or Facebook itself, to see what people are sending one another. The messages are in effect scrambled, except for the specific people sending and receiving them. Police gathering evidence against someone suspected of child abuse would thus be unable to ask Facebook for access to their encrypted communications on Messenger. By the same token, an oppressive regime would be unable to demand that Facebook provide messaging records from an activist group.
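To make the mechanics concrete, here is a minimal sketch of the idea in Python, using the PyNaCl library's public-key "box" construction. It illustrates the general principle only (Messenger's "secret conversations" are built on the Signal protocol, not on code like this), and the names and message here are invented for the example:

```python
from nacl.public import PrivateKey, Box

# Each person generates a key pair; the private half never leaves their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her own private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# A relaying server (Facebook, in Messenger's case) stores and forwards
# `ciphertext`, which is unreadable without one of the private keys.

# Only Bob can reverse the operation, using his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

The structural point is that no key ever sits on the company's servers, which is precisely why neither Facebook nor a subpoena served on Facebook can produce the plaintext.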

While it isn’t yet the default, Messenger already allows users to enable this kind of encryption via a “secret conversations” feature launched in 2016. The Times did not say whether any portion of the 12 million child abuse reports involving Messenger dealt with messages that were encrypted in this way.

But the story repeatedly suggests that the plan to flip the switch on end-to-end encryption by default is cause for concern, given the gargantuan role Messenger reportedly plays in distributing child-abuse material. It’s a fair enough point: If Facebook can’t sufficiently police its platform, as the story argues, why should the company also prevent outside authorities from intervening? But no one is asking a bigger question: Why should Facebook have this kind of private messaging service at all?

There may always be abusive content on the internet, but companies can revise or remove the platforms on which it thrives. We’ve typically approached Facebook as though its products — like Messenger or the News Feed — are somehow immutable or inevitable. They’re neither, as the tech industry itself has repeatedly proved. When it became clear that 8chan, a message board site associated with hate speech and violence, encouraged mass shootings, Cloudflare pulled its networking support and “deplatformed” the site, knocking it off the internet. Whenever it’s become clear that Facebook is a locus of harmful misinformation, violent livestreams, or child abuse footage, the company has promised iterative updates without fundamentally changing its products.

Focusing on the minutiae of how the platform works won’t lead to a solution if the problem is endemic to the platform itself. And a debate over encryption may never find a satisfying resolution. There will always be people who insist that privacy outweighs any criminal concern on a platform like Messenger, and others who insist the reverse. In this case, it is unclear how much encryption even matters, given that the Times investigation also showed that Congress, the Department of Justice, and law enforcement more generally had utterly failed to marshal the resources needed to fight this problem in the first place.

There’s an even more tragic dimension to this story. Statistics cited by the Times point to an exponential increase in reports of child abuse imagery since 1998, when there were 3,000 such reports. (“Just over a decade later, yearly reports soared past 100,000. In 2014, that number surpassed 1 million for the first time. Last year, there were 18.4 million, more than one-third of the total ever reported.”) The increase is clearly correlated with developments in consumer technology, broadband access, and internet speeds that make this material easier to produce and distribute. It may be helpful to remember as a point of context that Instagram didn’t exist until 2010, three years after the release of the first iPhone. Around this time, we entered a new epoch in humankind’s ability to seamlessly record and share images and video across the internet. Technology that connects people will connect all people, including those who would exploit children.

There are only so many ways to control this technology. Sometimes, people are hired to moderate content on platforms like Facebook; the work is unending and traumatic. But moderators can’t access encrypted private conversations. Automated systems could be developed to scan content shared even in private communications, but this is a moonshot. A bot may be able to identify a gun, but the volume and complexity of the variables are much higher in footage involving people. It would be extremely difficult to create a program that could consistently distinguish a sexually explicit video involving a child from, say, a clip of a father giving his infant son a bath.
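For context, the automated tooling that exists today mostly catches known material rather than novel footage: platforms fingerprint attachments against databases of previously identified abuse imagery (Microsoft’s PhotoDNA is the best-known example). Here is a simplified sketch of that approach, with an ordinary cryptographic hash standing in for a perceptual one and an invented blocklist; it also shows why end-to-end encryption defeats the technique:

```python
import hashlib

# Invented blocklist for illustration. Real systems match perceptual
# fingerprints (e.g. PhotoDNA) that survive resizing and re-encoding;
# a cryptographic hash is used here only to keep the sketch self-contained.
KNOWN_BAD_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_flag(attachment: bytes) -> bool:
    """Return True if the attachment matches a known fingerprint."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_FINGERPRINTS

# This check only works where the server can read the attachment's bytes.
# Under end-to-end encryption the server relays ciphertext, so there is
# nothing meaningful to fingerprint.
```

Even this narrow approach only catches recirculated files; recognizing newly created footage is the much harder classification problem described above.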

For now, we’re stuck with a more human approach to these problems. And a very sensible human response to this report would be to ask why Facebook should have a massive private messaging network that facilitates child exploitation on such a grotesque scale.

A few easy answers: Facebook has always had private messaging, Messenger is convenient, and the company now has a business imperative to keep the thing going.

Facebook’s official response to the Times article, sent to OneZero in response to a request for comment, attempts to make a virtue out of the last point. “We compete with encrypted messaging services around the world; and as encryption becomes more and more the industry standard, we remain the service best equipped to keep people safe,” a Facebook spokesperson wrote. “We are devoting new teams, sophisticated technology, and enormous resources toward the goal of building the most-safe, private space for people to connect.”

Maybe so. One thing Facebook certainly has achieved is ubiquity: Messenger has 1.3 billion users around the world. In a way, Facebook’s products are their own micro-internet. Child exploitation is certainly an issue outside of Messenger — the Times piece mentions Tumblr and sites on the dark web, for example — but apparently no other company bears as much responsibility for the spread of this content as Facebook. It’s the curse of bigness, perhaps, but also of unification: no single moderation tactic could ever have reached 1.3 billion people scattered across the internet, but on Messenger, one could.

It may simply be impossible to moderate the content that is exchanged between all of those people. But maybe there’s a simpler, blunter approach. We take for granted that you can send images, links, and videos on Messenger, but what if you… couldn’t? What if we’ve gotten the cost-benefit calculus of sending video on such a large, central platform wrong? Messenger could simply be text-based, as older messaging services were: easier to moderate automatically, and free of the risk that harmful videos or images will spread through it. There’s an even stronger argument that the same calculus might be applied to Live videos on Facebook, which have previously allowed people to broadcast shooting rampages and suicides. True, some users would go elsewhere, and the content would persist in some fashion, but it would no longer be supported by the dominant social network. There is a chance, at least, that its creation and distribution would be impeded, especially if other companies followed suit.

Such a drastic change to Messenger might seem extreme. But so is the problem itself. And to imagine that minor tweaks will solve it, at this point, amounts to a form of denial.
