Future Tense

Social Media Fact-Checking Is Not Censorship

Despite what Trump says, Twitter's use of a pop-up to flag incorrect information preserves free expression; it doesn't suppress it.

Photo illustration by Slate. Photos by Saul Martinez/Getty Images and Twitter.

Last Tuesday, President Donald Trump went to war with Twitter because the company appended a fact-checking link to two of his tweets. There was no account timeout, no takedown of tweets, no suspensions—only a link to a fact check of Trump’s false claims about California’s mail-in ballots. Nevertheless, the little blue annotation triggered an outcry of anti-conservative bias from the president, his advisers, and prominent supporters, and the outrage machine went into overdrive. The president declared that it was time for him to take on social media in the interest of “FAIRNESS.” On Thursday, he signed an executive order declaring that platforms would be required to demonstrate “good faith” in moderation decisions, under some definition of the term to be established by the Federal Communications Commission, if they wanted to keep their legal protections under Section 230 of the Communications Decency Act. (CDA Section 230 states that platforms are not liable for the content that third-party users post to their platforms, with some carefully enumerated exceptions. The law, as co-author Sen. Ron Wyden has explained it, gives platforms a “sword and a shield”—a sword that allows them to moderate and a shield that protects them from liability.) The order additionally reopened a complaint form for users to submit alleged moderation-bias grievances and put state attorneys general and the Federal Trade Commission in charge of assessing whether user complaints indicated that platforms were guilty of unfair practices.

The reframing of a fact check as evidence of anti-conservative bias is deeply problematic, because right now we need to see more correcting of misinformation, not less. This has become abundantly clear in the context of COVID-19. In May, for example, a video called “Plandemic” went wildly viral among certain communities on Facebook. The video was a 25-minute daisy chain of misinformation and outlandish allegations. Some of the claims were standard government-conspiracy fare, but it also alleged that oceans were full of “healing microbes,” and it gave specific advice to avoid masks. The viral spread of that video, which my team at Stanford studied extensively, showed how broadly sensational misinformation can travel: “Plandemic” got an early foothold in anti-vaccine and natural-health groups, rapidly hopped to thousands of QAnon and MAGA communities as well as dozens of left-leaning groups, and then continued on to be shared by ordinary people in hundreds of local chat groups and niche interest groups. By the time the platforms took it down, it had millions of views, shares, and engagements. The takedown itself spawned a secondary wave of reposts of the video and anger over censorship.

This weekend, again, the urgent need for reliable information was on display: As protests over the death of George Floyd exploded in dozens of American cities, people around the world turned to their screens to try to understand what was happening. Unfortunately, yet again, the need for information on one critically important topic afforded an opportunity for scammers, trolls, clout-chasers, and ideologues to push everything from selectively edited videos to outright rumors. Here, too, the most sensational claims went wildly viral, attracting the attention and shaping the perceptions of millions; our own research found spammers in Pakistan and Vietnam pushing out fake “Live” videos of policing incidents that had happened years prior, amassing millions of views in hashtags such as #JusticeForFloyd. Two days after the weekend’s protests, the news cycle was full of journalists’ attempts to fact-check the most sensational claims: antifa accounts that were found to belong to a white nationalist group, photographs of fires and damage from unrelated events, false suggestions that the U.S. National Guard was fielding a child militia, and misidentifications of individuals accused of involvement in various types of agitation or misbehavior.

The president’s war with Twitter, however, attempts to recast fact-checking as evidence of tech platform bias or to frame it as censorship. This is politicking, and it’s dangerous. Tech platforms curate the information we see; particularly in times of unfolding crises, they don’t always do a very good job of it. Content is ranked according to what a curation algorithm deems important—a combination of factors, often involving some degree of personalization, that considers what topics are getting the highest engagement across communities we belong to, what sources we read, and what we are most likely to be personally interested in, or click on. We increasingly occupy bespoke realities, tailored to our interests, as determined by algorithms that key off of our prior clicks. What we see is often whatever is getting the most likes. And since sensationalism and outrage drive clicks and views, wild claims regularly trend, particularly during a crisis. One notable example was the trending hashtag #DCBlackout, started by an account with three followers, which claimed that Washington was experiencing deliberate government-initiated wireless outages to prevent activists from coordinating. Twitter suspended hundreds of “spammy accounts” involved and continues to investigate.
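To see, in a loose and purely hypothetical way, how this kind of engagement-driven ranking rewards sensationalism, consider a minimal Python sketch. The weights, the Post fields, and the personalization term are all invented for illustration; no platform publishes its actual formula.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        shares: int
        comments: int
        topic: str

    def rank_score(post, user_interests):
        # Engagement signal: content that draws likes and shares scores higher.
        engagement = post.likes + 2.0 * post.shares + 1.5 * post.comments
        # Personalization signal: boost topics this user has clicked on before.
        affinity = 1.0 + user_interests.get(post.topic, 0.0)
        return engagement * affinity

    posts = [
        Post("Wild unverified claim!!", 900, 400, 300, "outrage"),
        Post("Measured official update", 120, 15, 10, "news"),
    ]
    interests = {"outrage": 0.8}  # prior clicks skew the feed further
    for post in sorted(posts, key=lambda p: rank_score(p, interests), reverse=True):
        print(round(rank_score(post, interests)), post.text)

Under these made-up weights, the sensational post outranks the measured one by more than an order of magnitude, and every click on it widens the gap for the next session.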

This type of viral, sensational misinformation can be deeply harmful. Recognizing this, since 2017 most platforms have developed fact-checking partnerships and other moderation tools to manage it. Moderation options take one of three forms: Platforms can remove content, deleting it from the platform; they can down-rank it, reducing its distribution; or they can annotate it, presenting a fact check in close proximity to the original information (such as via a link or an overlay). Trump’s allegations that fact-checking is censorship have it backward: Using a pop-up or interstitial to alert the public that certain content has been disputed is the one option that allows the bad information to stay up. It preserves free expression.
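For concreteness, here is an equally hypothetical sketch of those three options; the ModerationAction names, the render helper, and the label text are invented, not any platform's actual system. The point it makes is simple: annotation is the only intervention under which every reader still sees the original post in full.

    from enum import Enum, auto

    class ModerationAction(Enum):
        REMOVE = auto()     # content is deleted from the platform
        DOWN_RANK = auto()  # content stays up, but its distribution is reduced
        ANNOTATE = auto()   # content stays up, with a fact check alongside it

    def render(action, post, fact_check_url=""):
        """Return what a reader would see after the action is applied."""
        if action is ModerationAction.REMOVE:
            return None  # nothing left to read: the strongest intervention
        if action is ModerationAction.ANNOTATE:
            # The original text is preserved verbatim; only context is added.
            return post + "\n[Get the facts: " + fact_check_url + "]"
        return post  # DOWN_RANK: same text, just shown to fewer people

    print(render(ModerationAction.ANNOTATE,
                 "A disputed claim about mail-in ballots.",
                 "https://example.com/fact-check"))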

Unfortunately, the president and his surrogates are relying on convoluted rhetorical arguments to claim that tech platform efforts to surface more reliable content are evidence of anti-conservative bias. The fact-checkers (which include news organizations) are biased, the claim goes; appending a link to a fact check is editorializing, and editorializing is censorship. Sen. Ted Cruz went so far as to claim that appending a fact check to a presidential tweet was an affront to the First Amendment.

This is not unexpected. Allegations of anti-conservative bias have popped up for years whenever a presidential supporter has had an account taken down or a tweet deleted, or has sensed that Google results ranked them unfairly. One behavior-based moderation technique for minimizing spam and mild harassment, known colloquially as “shadow banning” (in which an account is down-ranked in the feed or its tweets are not returned in search results), has been recast as a plot by Big Tech leftists to silence conservative accounts because of their ideological content. These accusations continue to recur despite the fact that no investigation or audit—not even a high-profile effort run by a prominent conservative leader—has found quantitative evidence to support the claim that social network algorithms are deliberately ideologically biased. In fact, investigations have suggested the opposite: Conservative sites and influencers perform remarkably well in recommendation algorithms.

Given that the platforms handle tens of millions of moderation reports in a given month, they do make mistakes. As a result of some of these high-profile mistakes or policy gaps, members of nearly every political and ideological group across the spectrum have at one point claimed that platforms are stifling their beliefs out of deliberate ideological bias.

But among the subgroup of Americans who believe that alternative facts are facts, the claim that fact-checking is anti-conservative censorship is being used to drive political donations and sign-ups to campaign email lists. For nearly two years, Trump supporters have repeatedly heard that tech companies are working to silence them—and many now appear to believe it.

The study my colleagues and I conducted of the “Plandemic” video, and the waves of additional attention that followed its takedown, strengthened our conviction that the platforms should be doing more fact checks, not fewer. Takedowns of videos viewed millions of times are ineffective; worse, they risk enabling content creators to recast themselves as victims, taking public focus away from the correction and turning the moderation action itself into a debate about censorship. Informing the public when information that’s being shared widely is flawed or misleading—and doing it in the same place where the information appears, via annotations such as the one Twitter placed on the president’s tweets—creates an opportunity to challenge the bad information while avoiding any appearance of censorship.

The topics that are subject to content moderation policies are fairly narrowly scoped; currently, they’re largely limited to health misinformation, including misleading claims about the coronavirus, and misinformation about voting. In fact, just a few hours before posting those fact-checked California voting claims, Trump had insinuated that MSNBC’s Joe Scarborough had murdered someone. The supposed victim was a staffer in Scarborough’s congressional office who died of natural causes. Her widower has asked Twitter to take down the tweet, but the post remains live, without even the tyranny of a fact-checking annotation.

The president’s executive order is largely political theater designed to intimidate the companies; he simply doesn’t want to be limited in any way heading into the election, and picking a fight with Big Tech will appeal to his base. However, while legal experts agree that the specifics are largely unenforceable (and lawsuits have already been filed), the shot across the bow may chill moderation policies to some degree; it may make platforms pause before taking action when a prominent supporter of the president has violated a policy.

The First Amendment does not confer a right not to be fact-checked. The idea that powerful politicians should be exempt from fact checks is backward: It is precisely the powerful who need oversight. It’s time for platforms to offer their users reliable fact-checking more often, not less. Users can still decide whether to read the information or ignore the link. We can debate the composition of the fact-checking bodies, how the information is presented, what the user experience looks like, and how the checks are split between human and algorithmic reviewers. But despite the president’s best efforts to reframe this conversation, there is one thing that we should not dignify as a topic of debate: Fact-checking is not censorship.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.