Silicon Valley

Can Silicon Valley Disrupt Its Neo-Nazi Problem?

Tech leaders still have no coherent vision for how to police hate speech without becoming tyrants themselves.
From left: Mark Zuckerberg; neo-Nazis marching in Charlottesville, VA, on August 12th; Jack Dorsey of Twitter. Photographs, from left: by Drew Angerer/Getty Images; by Chip Somodevilla/Getty Images; by Michael Nagle/Bloomberg/Getty Images.

There’s a hypothetical question about Nazis that is often posed to non-Nazis: if you could go back in time and kill baby Adolf Hitler, would you? Believe it or not, it’s actually a tough question for a lot of people to answer, with almost a third of those asked saying they’re unsure. I get it. Killing a baby is hard. Even if he did grow up to murder 6 million innocent people. But for those not so sure about such ethical and moral quandaries, let me ask an easier hypothetical question: if social media had existed in the 1930s and ’40s, and you could go back in time and ban Hitler from using Twitter (and a dozen other platforms), would you? To me, that question is a lot easier to answer with an unequivocal yes. But for most companies in Silicon Valley, it’s not so simple.

It’s not Hitler they have to worry about, but rather the divisive and disgusting views of his modern-day disciples, who this past weekend wreaked havoc in Charlottesville, Virginia, when a white-supremacist rally turned deadly. (The response to the rally has also proved to be one of the most diabolical moments of Donald Trump’s presidency so far.) You don’t have to look far to see just how much of a role the tools built by Silicon Valley played in this melee. For decades, fascist pro-Nazi thinking has (mostly) been shunned by American society, relegated to mountaintops in the South where Klan members hid their identities behind white sheets. But what’s become increasingly apparent, especially in Donald Trump’s rise to the presidency and now in the days since the fatal protest, is that the technologies we all use today are allowing these groups to proliferate and congregate in ways not seen since the 1930s.

Just ask the neo-Nazis and white supremacists in Charlottesville, and they’ll proudly admit as much. In a Vice video shared widely online that documented the organizing of the rally, Robert “Azzmador” Ray, a self-proclaimed feature writer for the neo-Nazi Web site the Daily Stormer, blatantly pointed out that the journey from ideology to marching with weapons in Charlottesville started on the Internet. “For one thing [today] means that we’re showing this parasitic class of anti-white vermin that this is our country,” he rattled off in the video, and then the clincher: “As you can see, we are stepping off the Internet in a big way.” Ray then proudly pointed out that all these tech tools we use to share news articles and talk about the latest episode of Game of Thrones have enabled white supremacists to see that “they are not atomized individuals” and that they are part of a larger whole that was brought together by “organizing on the Internet.” Ray is not alone in this belief. In December, white supremacist Richard Spencer offered a prelude to all of this in a separate Vice interview when he noted, “We memed the alt-right into existence.”

If that’s not proof enough, Kevin Roose reported this week in The New York Times that to “alt-right” groups, technology is the oxygen that keeps them alive. He discovered this by lurking on message boards related to the then-upcoming rally, where he witnessed the beginning, middle, and end of these tools at work. “They posted swastikas and praised Hitler in chat rooms with names like ‘National Socialist Army’ and ‘Führer’s Gas Chamber,’” Roose wrote, and “they organized last weekend’s ‘Unite the Right’ rally in Charlottesville, Va., connecting several major white supremacy groups for an intimidating display of force.” (While these groups did use Twitter and other social sites to spread hate online, the Charlottesville organizers primarily used a messaging service called Discord.)

Granted, without the Internet, some of these Nazis and white supremacists would have found each other regardless (as they have during other protests over the years). Racial terror, it goes without saying, predates the Internet. But in an era when it is becoming harder to succeed in polite society as an uncloseted racist—with a few notable exceptions—white supremacists have found comfort in the anonymity of online avatars. When they descended on Charlottesville this past weekend, they found courage, and a kindred hatred, in the anonymity of the crowd.

In the days since the chaos in Charlottesville, Silicon Valley has been suffering from its own moral conundrum over what to do about this reality. For over a decade, tech companies have offered the mantra that they don’t decide what people say on their platforms, and that mostly unmitigated free speech is the only way to allow the free flow of information. For Twitter, we’ve all seen how that worked out. But now, after a woman died and the outrage over that attack has gone global, tech companies have been vociferous in their decision to stand up to hate groups—Nazi hate groups, at least. But it’s unclear what the actual rules are. No one has a clear idea of what is accepted and what is not on these platforms, and, in many respects, that ambiguity is more dangerous than anything.

For example, last year, Twitter permanently suspended Milo Yiannopoulos, the far-right blogger, after he led a racist harassment campaign on Twitter against Ghostbusters actor Leslie Jones. But countless other accounts linked to neo-Nazis and hate groups remained on the site long after Yiannopoulos was booted. It took four days after the Charlottesville attacks for Twitter to ban the Daily Stormer’s neo-Nazi Twitter account, and those associated with it. And yet Yiannopoulos is still on Facebook yapping away about how dumb and terrible the “alt-left” is, with lovely observations such as, “Liberals are such idiots.” The chief technology officer of the Daily Stormer, Andrew “Weev” Auernheimer, is on Facebook, too, though Facebook and Instagram announced on Wednesday that they were banning another neo-Nazi account, one associated with Christopher Cantwell, the racist white nationalist in the Vice video I mentioned earlier. (I know, this is all very confusing, but don’t even get me started on what Facebook allows, or ignores, in other countries.) Spotify is removing white-supremacist bands from its service, and GoDaddy and Google have canceled the domain registration of the Daily Stormer, but the site’s C.T.O. is still active on YouTube and other Google platforms. Caving to pressure after three people died as a result of the protests, Discord, the platform the alt-right used to organize the Charlottesville rally, has since banned neo-Nazi groups from the service, but many are now heading to Skype to organize.

In fairness to these social networks, figuring out whom to disable is not easy. On Twitter, for example, anyone else with an account that tweeted like Donald Trump might have been banned long ago for violating the company’s terms of service. But kicking Trump off today would be corporate suicide. On Facebook, Mark Zuckerberg said Wednesday that “we’ve always taken down any post that promotes or celebrates hate crimes or acts of terrorism . . . with the potential for more rallies, we’re watching the situation closely and will take down threats of physical harm.” But there’s a problem there, too. As anyone who has seen their personal information shared online or been pummeled by an anonymous mob will tell you, the threat of physical violence is a lot less scary and painful than the atrocities an army of pathetic trolls can accomplish in just a few hours on the Internet. Nor can tech companies rely on algorithms alone to detect what’s really harmful and what is not. For example, people have manipulated Facebook’s page recommendations to suggest that a neo-Nazism page is similar to the U.S. Army, Nate Silver, and the California fast-food chain T.K. Burger. A page on National Socialism, or “Hitlerism” as it’s labeled, suggests that former President Barack Obama and a cosmetics store in Brazil are both pro-neo-Nazi, too.

When companies do decide to pull the plug on these hate groups, it can leave them feeling like they’ve done the right thing on one hand and the wrong thing on the other. Cloudflare, the Internet security service used by the Daily Stormer, decided Wednesday that it was going to kick the neo-Nazi site off its platform. Cloudflare’s C.E.O. was torn over his decision. “I woke up this morning in a bad mood and decided to kick them off the Internet,” Matthew Prince, the company’s chief executive, said in a statement. “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” Now, before you go and give Prince credit for what seems like a good deed, in May ProPublica wrote about Cloudflare and found that it not only helped protect neo-Nazi sites online for months but also passed along personal information about people who complained about their content to those very sites, often leading to trolls coming after the people behind the complaints. In 2013, Prince defended his view on sites like the Daily Stormer, writing in a blog post, “A website is speech. It is not a bomb.”

Last year, when it became clear that Facebook’s lax guidelines and almost nonexistent oversight had allowed Russians to create, and Americans to share, fake news that arguably helped Trump become president, a Facebook employee told me that it’s a slippery slope for the social network to decide what is “fake news” and what is not, and that it would be a terrible idea for Facebook to have to decide what to delete from the Web site. Fox News, the employee noted, often skews its presentation of a story with a conservative bent—“is that fake?” the employee asked. Or when The New York Times makes a big mistake in an article, is that fake? It’s unclear how you would ever score what’s fake and what’s real in stories being shared from sites like Breitbart or InfoWars. A tech investor I spoke with noted that it’s not easy for companies to simply ban groups because we, as a society, don’t agree with their belief systems. The better route, the investor argued, is to ignore these groups and let them fade off into oblivion. But that doesn’t take into account the media attention these organizations get.

The American Civil Liberties Union arguably has the most experience in this space, having defended people like Yiannopoulos, as well as the First Amendment right of numerous neo-Nazi groups to assemble, on the principle that all speech is free speech. And while there are some within the organization who disagree with the A.C.L.U.’s stance on these topics—members even quit the A.C.L.U. in the late 1970s over a similar Nazi protest—the nonprofit law group continues to do what it’s done for decades: representing Nazis and members of the Ku Klux Klan. We may not agree with the A.C.L.U., but we know where the group stands. When it comes to tech, that is not the case. The industry as a whole doesn’t have a clear vision of where it will draw the line and where it will not. And within each company, the rules are vague at best. To me, the ambiguity of companies like Facebook, Twitter, GoDaddy, and Cloudflare being able to decide on the spur of the moment what can stay and what must go is more terrifying than neo-Nazis using their platforms. Just last week, all of these companies were perfectly O.K. playing host to the Daily Stormer; after the post-Charlottesville backlash, they are not. “No one should have that power.”