Booting Trump didn’t set a precedent. From Yelp to Uber to Airbnb, platforms regularly ban users and content, but too often behind the scenes.
Donald Trump’s accounts have been banned on Twitter, Facebook, and a host of other platforms. Every last one of @realdonaldtrump’s 47,000 tweets vanished from the site in an instant, from the birther lies and election conspiracy theories to the 2016 taco bowl tweet. In an explanatory blog post, the company cited the attack on the Capitol and “the risk of further incitement of violence” if Trump were allowed to keep tweeting. His multiplatform removal has drawn cheers from many, as well as the ire of more than a few Trump supporters. The bans have also raised concerns that the companies went too far in exercising their power to shape what users see.
As it turns out, Trump is far from alone in having his content deleted by a tech company. With surprising regularity, online platforms flag or remove user content that they deem objectionable. Twitter’s recent ban of 70,000 accounts associated with QAnon mirrors other initiatives the company has undertaken to combat extremist groups. It has banned well over a million accounts associated with terrorist groups, including a large set associated with the Islamic State. In the first half of 2020 alone, Twitter suspended roughly 925,000 accounts for rules violations.
While some content removal can be framed as a matter of safety or national security, the practice occurs in much more mundane situations as well. Yelp (where I have consulted in the past), for example, has gathered hundreds of millions of local business reviews, which have been shown to affect business outcomes. Its popularity has created new challenges, including fake reviews submitted by businesses in disguise trying to boost their online reputation (or to pan competitors). To combat review fraud, Yelp and other platforms flag reviews they deem spammy or objectionable and remove them from the main listings of the page. Yelp puts these into a section labeled “not currently recommended,” where they are not factored into the ratings you see on a business’s page. The goal of approaches like this is to make sure people can trust the content they do see.
In a 2016 paper published in Management Science, my collaborator Giorgos Zervas and I found that roughly 20 percent of reviews for Boston restaurants were getting pulled off Yelp’s main results pages. Platform-wide estimates show even higher rates of content removal: some 25 to 30 percent of all reviews aren’t shown on businesses’ main review pages. Yelp is of course not alone in this practice. Tripadvisor and other review platforms also invest in removing reviews that seem likely to be fake.
Online marketplaces also have a history of kicking users off the platform for bad behavior. In a series of papers, my collaborators and I found widespread evidence of racial discrimination on Airbnb. In response to our research and proposals, coupled with pressure from users and policymakers, the platform committed to a broad set of changes aimed at reducing discrimination. One of these steps (which we had proposed in our research) involved creating new terms of service requiring users to agree not to discriminate on the basis of race in their acceptance decisions. The new terms had considerable bite: Airbnb ended up kicking off more than a million users for refusing to agree to it. Uber also has a history of removing users, from drivers who don’t maintain a high enough rating to 1,250 riders who were banned from the platform for refusing to wear a mask during the pandemic.
All of this points to the power of platforms to shape the content we see, and to an often overlooked way in which platforms exercise that power. Ultimately, removing content can be valuable for users. People need to feel safe in order to participate in markets. And it can be hard to trust review websites riddled with fake reviews, housing rental websites rife with racial discrimination, and social media platforms that are megaphones of misinformation. Removing bad content can create healthier platforms in the long run. There is a moral case for banning the president. There is also a business case.
Deciding exactly which content to remove involves subjective assessments, including what counts as objectionable and how to penalize objectionable content. There are important limits to how much companies can say about how they make these assessments. If Tripadvisor, for example, explained exactly how it decided whether a review seemed fake, it would become much easier for businesses to game the system. Even so, platforms too often sweep the details of these decisions under the rug; they’d benefit from being more transparent.
To its credit, Twitter has been transparent about the rationale for Trump’s ban, including a blog post pointing to specific tweets that it viewed as a breaking point. The company provided a plain-English summary of which terms of service it thought were violated and why. Twitter CEO Jack Dorsey also tweeted that he does “not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here” and was open about the need for “more transparency in our moderation operations.” This lesson applies to a broader set of platforms: We are living in a world in which content flagging and removal is commonplace, and platforms play a central role in shaping the online content that we consume. These decisions matter. As Dorsey notes, “Offline harm as a result of online speech is demonstrably real.” To build trust for the long run, platforms need to be open about the content they are removing and why they are flagging it.
All Rights Reserved for Michael Luca