What Mark Zuckerberg’s New Vision Could Really Mean for Privacy and Propaganda

Some critics worry the new privacy push is also a way to dodge regulation and avoid moderating content: ‘The devil is in the details’

It’s probably no surprise to Facebook CEO Mark Zuckerberg that his Wednesday blog post calling for “a privacy-focused vision for social networking” was quickly met with skepticism.

“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” he wrote, suggesting greater interoperability to send encrypted messages between Facebook Messenger, Instagram, and WhatsApp and more support for automatically deleting messages.

Zuckerberg acknowledged in the post that Facebook doesn’t have a “strong reputation for building privacy protective services,” an obvious reference to recent scandals like data analytics firm Cambridge Analytica using quiz apps to gain access to user data and reports that Facebook lets people search and target ads based on the phone numbers that users provide for two-factor authentication. (Facebook didn’t immediately respond to an inquiry from Fast Company.)

Still, industry observers quickly questioned not just Facebook’s ability to build a private platform, but whether the company had ulterior motives for promoting privacy, such as dodging responsibility for what users share online or linking its networks in a way that could evade antitrust scrutiny.

Facebook has faced increased scrutiny lately for anti-vaccination posts on its core platform and viral rumors on messaging app WhatsApp that have contributed to deadly violence overseas. Tracking the spread of misinformation and figuring out how to combat it would likely become more difficult if discussions move from public and group posts to encrypted messages readable only by their recipients.

“By implementing end-to-end encryption throughout, FB could plead ignorance as to what their users are doing and potentially circumvent legislation meant to remove the most harmful content from online platforms,” writes Hany Farid, a computer science professor at Dartmouth College who’s studied online disinformation, in an email to Fast Company. “It is possible that there is a less nefarious explanation for this proposal, but given the timing, it is hard for me to see what that might be.”

It’s possible that Facebook might still be able to detect malicious users, like the international propagandists it’s removed from its networks in recent years, without being able to see the content of messages. The company could spot unusual patterns in where people log in versus the people they connect with, or notice large numbers of accounts being created from the same place, suggests Cristian Vaccari, a researcher in political communication at Loughborough University in the U.K.

If users, and the algorithms looking to serve up engaging content, can’t see newsfeed indicators like the number of shares and likes a post gets, that might also slow down viral messages in some cases, he suggests. On the other hand, viral content might spread through messaging services with less information about where it originally came from, making it harder for users to study messages with a critical eye.

“As with everything with these platforms, the devil is in the details,” says Vaccari.

Facebook has already taken some steps to curb the spread of rumors on WhatsApp, limiting how widely messages can be forwarded after child kidnapping rumors on the platform apparently spurred on violent lynch mobs that killed more than a dozen people in India, and after the app was reportedly used for unsolicited mass propaganda blasts in Brazil. But unless users share encrypted messages outside the app, it’s difficult for Facebook or outside observers to know what’s circulating on the platform.

“When you look at the ways that WhatsApp has been abused and hijacked in India and Brazil, it’s clear that it’s a powerful engine for spreading dangerous propaganda,” says Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia. “It’s also clear that there’s not much Facebook can do about that, because all the messages are encrypted. Facebook can’t measure the problem or filter for the problem.”

So far, it’s unclear exactly what steps Facebook will ultimately take to beef up user privacy beyond making encrypted messaging more widespread. Zuckerberg makes clear in his post the company will still be considering details “over the next year and beyond,” and there’s been no suggestion that Facebook would ever shut down traditional feeds on Facebook or Instagram.

Even rolling out encrypted messaging in a way that’s useful and secure for the billions of users on Facebook’s platforms is likely far from simple, says Gennie Gebhart, associate director of research at the Electronic Frontier Foundation. Certain features that might be essential for some users, like enabling unencrypted message backups to services like Apple iCloud, could be a disaster for others who want their messages stored only on their phones in encrypted form, she says. And whether encryption is on or off by default across the various services will also likely matter: requiring users to turn encryption on manually can confuse them, and it makes the encrypted messages that do get sent stick out amid network traffic, suggesting those users have something to hide, she says.

“All of a sudden those end-to-end encryption chats stick out like a sore thumb,” she says. “If the whole network is encrypted, a bad actor or government won’t know where to look.”

David O’Brien, a senior researcher at Harvard University’s Berkman Klein Center for Internet and Society and the center’s assistant research director for privacy and security, suggests the blog post could prove similar to a famed memo from Bill Gates to Microsoft staff in 2002 calling for secure and “Trustworthy Computing.” While Microsoft’s security reputation has improved dramatically since that time, changes didn’t happen overnight, and the same may be true of Facebook, he says.

“I think it’s going to take years for this to really pan out,” he says.

For experts concerned about the trade-offs between individual security and regulating abusive content online, that might not be a bad thing, if it means Facebook is more likely to find systems that meet the needs of its users and the public at large.

“This is not the time to move fast and break things,” Farid writes. “This is the time to move slowly and not break (more) things.”

All Rights Reserved for Fast Company
