Fifteen important things to say about Facebook, Twitter, and the New York Post’s Hunter Biden story

Nobody looks good here

Yesterday, the New York Post published an article based on what it alleged were emails and photos obtained from Hunter Biden’s personal laptop. The story (and a later follow-up article) focused on Hunter Biden’s ties to Ukrainian energy company Burisma, which have formed the basis for several earlier political attacks on Joe Biden during his presidential campaign. Reporters outside the Post disputed its allegations and its trustworthiness. Then, social media companies stepped in.

Facebook reduced the reach of the Post’s story the morning it was published, saying that it was eligible for fact-checking by the platform’s partners. Twitter went further and banned linking to the story at all, citing a policy against posting hacked information. While both sites have introduced stricter moderation rules in recent months — each banned Holocaust denial posts earlier this week, for instance — it was an unusual crackdown on an investigative story from a well-known print publication. And quickly, the sites’ decisions became the story.

This is a complicated saga, and almost nobody involved comes out looking good. But it illustrates some very obvious problems with political discourse, social media, and how information works on the internet.

1. The New York Post story raises real red flags. The documents come from dubiously trustworthy and politically motivated sources, specifically Trump lawyer Rudy Giuliani, who is linked to an alleged Russian agent accused of election meddling. While opposition research is nothing new or exclusive to Republicans, it’s possible that the alleged Biden emails were doctored, obtained in a way less innocuous than a lost laptop, or leaked with the explicit goal of foreign interference in the US election.

2. Russian operatives used social media and leaks from Democratic sources to interfere in the 2016 presidential election on President Donald Trump’s behalf. The Democratic National Committee’s emails were likely exposed by Russian state-sponsored hackers, and the Russian Internet Research Agency created Facebook and Twitter accounts purporting to represent American activists. Social media networks were widely excoriated for failing to act, and they would probably have been criticized for letting the Post’s claims spread widely — especially Facebook, where CrowdTangle data indicates the story was particularly popular.

3. Facebook and Twitter have a troubling amount of power over online speech. Both platforms could logically ban huge swaths of influential, highly respected investigative journalism under their policies. It’s unclear how Facebook fact-checkers are supposed to verify a story based on private documents obtained by a single news outlet, and “hacked” documents from sources like Chelsea Manning and Edward Snowden formed the basis for award-winning stories at illustrious publications.

4. Twitter and Facebook have the First Amendment on their side. Social media platforms almost certainly have the legal right to ban this story — or even the entirety of the New York Post. And it’s not because of Section 230 of the Communications Decency Act, a perpetual tech policy punching bag for Republicans and Democrats alike. The First Amendment generally protects websites’ right to avoid hosting speech they don’t like, barring special cases, like an antitrust allegation, that don’t make much sense here. (Senator Josh Hawley has argued that banning an anti-Biden story counts as election interference, but that’s quite a stretch.) Anyone who claims this is clearly unlawful behavior, or that the only legal defense is Section 230, is either mistaken or lying.

5. Legality isn’t the only standard at play here. Non-governmental corporate policy has plenty of tangible effects on people’s lives, and for many users, social media is basically the internet. So from here on out, we’re not talking about whether Facebook and Twitter can restrict articles like the Post’s, but whether doing so is good for users, journalists, democracy, and the networks themselves.

6. Facebook’s decision to limit the story’s reach seems like an extremely fuzzy application of its anti-misinformation policies. The platform routinely downranks false information, but here it appears to have preemptively reduced the Post article’s reach before any fact-check was complete. And while Facebook said the article would be subject to its third-party fact-checking program, no fact-checking label was attached as of this article’s publication.

7. Twitter’s decision was, at least, more clearly explained. The site has rules against publishing hacked information, it’s used these rules to ban links before, and copying personal files from a laptop without permission — if that’s indeed how they were obtained — arguably falls under them. Twitter later elaborated on its decision, saying the Post story included “personal and private information — like email addresses and phone numbers — which violate our rules.”

8. We don’t know how the ban will practically affect Trump or Biden’s campaigns. Many people have suggested that it backfired and gave the articles about Biden more exposure, but it’s hard to tell if it will actually spread the original story’s allegations, if the controversy will just add to the general Republican distrust of social media, or if it will simply get lost in the breakneck election news cycle.

9. This is a platform power problem. If the internet weren’t synonymous with a handful of mega-sites that act as ubiquitous quasi-utilities for global speech, we could simply treat Twitter and Facebook as websites with idiosyncratic community standards. If you didn’t like their rules, you could link to the Post’s article elsewhere.

10. People have tried to build alternatives. It hasn’t fixed the problem. Smaller platforms like Parler and Gab exist, and they have a disheartening tendency to devolve into poorly moderated echo chambers devoted to spiting Twitter and Facebook. Decentralized systems like Mastodon are fascinating, but also more confusing than a unified social network. Truly successful alternatives like TikTok embrace totally different styles of communication.

11. This is a worrying precedent. Relying on Facebook and Twitter to save America from misinformation or propaganda entrenches the philosophy that a handful of corporations should be given nearly absolute power over the ideas people can express in both public and private. Twitter’s restriction notably stops people not only from tweeting the link, but from sharing it in a direct message — the platform’s equivalent of an email. That’s not a big deal for a single forum on a bigger internet, but the more powerful these few platforms become, the scarier it sounds.

12. Excellent reporters acting in good faith can still publish false information. Traditionally, reporting mistakes and bad sources are exposed by other journalists, subject matter experts, or sources with firsthand knowledge. Social media companies have none of that expertise, and moderation doesn’t offer new information to help readers make up their minds; it just suppresses the original story. This fundamentally short-circuits the normal journalistic process.

13. The normal journalistic process doesn’t necessarily work online. Attention is the prime currency of modern media. Debunking articles still spreads their original allegations. And if people distrust the source doing the debunking, it can reinforce belief in the original story, whether or not it’s true. Misinformation experts have argued that this helps feed stories that are far more bizarre and less credible than the Biden/Burisma controversy — particularly the QAnon conspiracy theory, but also the beliefs of far-right extremists and false claims about voting.

14. It’s naive to treat all news stories as equally trustworthy. Whether or not Facebook and Twitter made the right call here, it’s generally reasonable for moderators to make subjective judgments based on a story’s plausibility, a publication’s track record, or other factors beyond a flat legalistic standard.

15. Social media moderation is a band-aid on widespread institutional failure. Despite all the aforementioned problems, sometimes social media moderation feels like the only card left to play. The Post story reminded many commentators of the controversy over Hillary Clinton’s email server, a relatively minor scandal that the traditional reporting process (along with politicians and the FBI) inflated to nightmarish proportions. Ideally, you would keep conspiracy theories in check with skepticism from the news media, government agencies, or other trusted political leaders — but none of those institutions now have the credibility to effectively argue back. After decades of decline, there’s simply no trust left to draw on.

When everything else feels like it’s breaking down, it’s not surprising that people want Facebook or Twitter to intervene with their clear, unilateral power. Unfortunately, that doesn’t make them the right tool for the job.