Rule one: No harassment/bigotry.
Rule two: No photo editing.
Rule three: "No makeup" look must have before/after.
Rule four: All photos must showcase the makeup clearly and be properly cropped.
Rule five: All makeup looks and collections must include a detailed product list.
Rule six: All "fake blood/injury" looks must be marked NSFW.
Rule seven: No self-promotion.
Rule eight: Show the products.
Rule nine: Provide the source.

In one of the most seemingly innocuous chat rooms you could imagine, mostly women, and some men, too, share their latest makeup experiments, hoping for some feedback, some piece of constructive criticism that'll help them perfect that winged eyeliner, that pristine cut-crease eyeshadow look, that much-sought-after Kylie Jenner pout.

Technically, it's a subreddit, a forum devoted to a niche area of interest on Reddit. This subreddit, r/MakeupAddiction, is one of the most popular, with over 1.6 million members subscribed. And for all of those users, there are only 10 content moderators making sure people don't post something off-topic, crude, or plain trollish. The rules listed above are theirs.

Every subreddit has a specific set of guidelines that sit a tier below Reddit's own policies, which apply across the board. Moderators on r/MakeupAddiction, for example, decided the forum was for sharing new looks and getting feedback without harassment. Interestingly, users who comment on a photo without offering feedback on the look itself, typing instead about how pretty the person in the photo is, get downvoted. That means their comment carries less weight and can even be voted down into the negatives, which hides the comment altogether.
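To picture the mechanic, here is a rough Python sketch of score-based comment collapsing; the threshold value and the Comment class below are simplified stand-ins for illustration, not Reddit's actual code.

```python
# Hypothetical sketch of score-based comment collapsing, loosely modeled on
# how heavily downvoted comments get hidden. Threshold and classes are
# illustrative assumptions, not Reddit's real implementation.
from dataclasses import dataclass

COLLAPSE_THRESHOLD = -5  # assumed cutoff; Reddit's real value is user-configurable


@dataclass
class Comment:
    body: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes

    @property
    def is_collapsed(self) -> bool:
        # A comment voted "into the negatives" past the cutoff is hidden by default.
        return self.score <= COLLAPSE_THRESHOLD


comments = [
    Comment("Love the blended crease, maybe soften the outer wing.", 42, 3),
    Comment("Wow, you're so pretty!", 2, 15),  # no feedback on the look itself
]
for c in comments:
    visibility = "hidden by default" if c.is_collapsed else "visible"
    print(f"{c.score:+d}  {visibility}: {c.body}")
```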


In that way, the community polices itself. In one case a few months ago, a woman who admittedly did have nice makeup showed far more of her chest in the photo than her actual face. It was downvoted to hell. The users on r/MakeupAddiction more or less said, "We aren't here to see pretty people, we're here to see cool makeup looks." And so a group of upvote-downvote Reddit warriors joined the growing mass of users who voluntarily moderate the site to keep their spaces sacred.

Not every subreddit is concerned with makeup, of course. Some of the content moderation requires viewing grotesque content, like violent images and videos, or at least inappropriate content, like photos of penises where they don't belong.

Robert Peck, a mod for one of Reddit's top 10 most popular subreddits, r/aww, which specializes in cute animals, wrote about the not-so-fun parts of his job in Wired:

At /r/aww, people don’t always submit pictures of kittens and puppies. Sometimes they post gore porn, or threats to find me and hurt me. My rules are both obvious (kittens are great; no gore porn, no threats) and designed to prevent misuse of the platform (no social media links or handles, and no spamming). At /r/pokemon, I block pictures of, say, caterpillars, because those aren’t Pokémon, are they? No, no, they aren’t.

This all raises the stakes for a company like Reddit: How can you moderate content on a social media website without completely censoring your users? It's something other companies like Facebook and Twitter, in particular, must also grapple with, but it seems like Reddit has the best-tasting recipe.

Reddit's first-ever paid employee, Chris Slowe—an engineer who now works as Chief Technology Officer—says it's a delicate mix of human and computer interaction.

Most of the work is done by humans, not bots, Slowe says. And it's not like the outsourced content moderation farms in Asian countries like the Philippines, where the work is so traumatizing that some workers have died by suicide. No, these are the subreddit moderators who work as volunteers, the users who upvote good content and downvote the irrelevant. They see value in being mods; they're the rulers of their kingdoms. They're the vigilantes on neighborhood watch in their digital communities.

"If you look at social platforms, they work because 98 percent of people are good and they’re funny," Slowe says. "The problem is the 2 percent who don’t. As a platform, our job and the community's job is to keep a handle on the two percent."

"Think of It Like a Funnel."


Slowe, who first started at Reddit in 2005, now has about 250 engineers reporting to him. Altogether, the company has about 550 paid employees, he says. But back when it was really just him and cofounders Alexis Ohanian and Steve Huffman, he was writing the code for what is now considered Reddit's older web stack.

In those days, the team modeled the site after Slashdot, a social news website with the slogan "News for Nerds. Stuff that Matters," which launched in 1997 and is still considered a Reddit competitor. Slashdot resembles today's Reddit in many ways, especially in its comment threading and voting systems, which make the sites more democratic in decision-making and, hopefully, make the job of content moderation easier; users are quite literally telling the site what they do and don't like.

    The company has evolved to think about content moderation as a sort of three-tiered system, Slowe says.

    "We have an operations team that is in charge of enforcement of the platform," he says. Their job is to make sure no one is in violation of community rules. These are what you might expect: no child pornography, no harassment, no spam, no soliciting services, the whole shebang.

    In Slowe's words, these are the "set of things that a user flags automatically and percolate up to us."

Think of it like how a government is set up in layers. “We have to clean up the mess," Slowe says, but it's the smallest part of the process, involving just a few dozen humans (no mod farms, thankfully) and some help from machines, through algorithms that detect bad content.
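To picture that funnel, here is a rough Python sketch of the idea: automated filters catch the obvious violations, and only the gray-area reports percolate up to a small human team. The Report type, the banned-term list, and the spam heuristic are illustrative assumptions, not Reddit's real tooling.

```python
# Minimal sketch of a report "funnel": machines handle obvious violations,
# humans get the rest. Everything here is a toy stand-in for illustration.
from dataclasses import dataclass

BANNED_TERMS = {"obvious slur", "obvious threat"}  # stand-ins for real policy rules


@dataclass
class Report:
    post_id: str
    text: str
    reporter: str


def looks_like_spam(text: str) -> bool:
    # Toy heuristic standing in for real detection algorithms.
    return text.lower().count("buy now") >= 3


def triage(reports: list[Report]) -> list[Report]:
    """Return only the reports that need a human admin's judgment."""
    escalate = []
    for r in reports:
        if any(term in r.text.lower() for term in BANNED_TERMS) or looks_like_spam(r.text):
            print(f"auto-removed {r.post_id}")  # the machine layer handles it
        else:
            escalate.append(r)                  # gray area percolates up to humans
    return escalate


inbox = [
    Report("t3_abc", "Buy now! Buy now! Buy now!", "u_reporter"),
    Report("t3_def", "This post feels off-topic, not sure.", "u_reporter"),
]
print("needs human review:", [r.post_id for r in triage(inbox)])
```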

The human side is important to Reddit, says Slowe. It's better than "leaving it up to some black box machine learning.” And where paid human mods are required, he says, the Reddit team pays special attention to their mental health.

    "They're effectively serving the role of soldiers. PTSD is a real thing for people doing this kind of stressful work."

Other than that, it's the Wild West out there in the subreddits. So in come the volunteer community watchguards and their panopticon of moderation. In Slowe's terms, this is tier two of that government analogy.

Volunteer subreddit moderators set their own rules and decide how to enforce them with near-complete freedom, so long as they follow the sitewide guidelines Reddit has established overhead. In the r/AskScience subreddit, for example, answers to user questions must be written in terminology that a high school student or younger could understand. College profs, beware lecturing here.

Interestingly, when moderators don't hold up their social contract with Reddit users, those in the subreddit push back, Slowe says. They may push out a moderator, create new rules, or split the subreddit's population into two separate communities with slightly different topics. It can be messy, he says, but that's the nature of people.

But the most interesting component of Reddit's content moderation system is its community-wide upvote/downvote binary.

    “Voting is peer moderation," Slowe stressed. "You shouldn’t downvote if you disagree, but downvote is kind of like the signal that covers ‘I don’t want to see any more of this, go away.’"

    That's different from reporting content that you don't want to see, which is more common on sites like Facebook.

Keeping content up on Reddit, but allowing it to be downvoted, is also democratic in nature, Slowe argues: otherwise, if there are only "positives" in a given subreddit, users will find themselves deep inside an echo chamber.

The numbers don't lie. Slowe reports that over the last decade, Reddit has averaged a ratio of seven upvotes to each downvote across the site.

    What's So Broken in Content Moderation?


    Well, what's not broken with the system? In an April 2019 report from the Electronic Frontier Foundation, a nonprofit digital rights group established in San Francisco in 1990, the group defined content moderation as "the depublication, downranking and sometimes outright censorship of information and/or user accounts from social media and other digital platforms, usually based on an alleged violation of a platform’s 'community standards' policy."

    The thing is, community standards and policies are, more often than not, made up on the fly, according to sources in the report. In one scenario, Roz Bowden—now a finance administrator for The Royal Society for the Prevention of Cruelty to Animals in Southwater, England—recalled her experience working the graveyard shift for MySpace between 2005 and 2008 during a conversation with the BBC:

    We had to come up with the rules. Watching porn and asking whether wearing a tiny spaghetti-strap bikini was nudity? Asking how much sex is too much sex for MySpace? Making up the rules as we went along. Should we allow someone to cut someone's head off in a video? No, but what if it is a cartoon? Is it OK for Tom and Jerry to do it?

It was a half-baked approach, Bowden recalled, and the decisions that she, her small team, and dozens more at other social media companies made continue to shape content moderation as we understand it today.

    Then there's the problem of content moderation farms in far-flung countries, where rich tech moguls don't see the contractors at all. They face extreme stress, sometimes to the point of developing PTSD, and many workers become depressed. In India, some workers are reportedly paid just $6 per hour to go through disturbing images and video for hours on end.

So is it a people problem? Well, even if content moderation is fully handled by artificially intelligent systems, the algorithms are still trained by people and inherit their biases. There's the infamous example of a Google algorithm labeling photos of African Americans as gorillas, and that's only one of many times AI hasn't panned out so well.

    Besides, do we really want computers alone to tell us what to do? You can almost imagine HAL as a content moderator, watching over your shoulder as you type something up and look to the disembodied voice for permission to post.

    "I'm sorry Dave, I'm afraid I can't do that."

    How Will Tech Change Moderation?


    Still, new technology is inevitably going to be part of the evolution of content moderation. We've already seen that in the nearly 15 years that Reddit has existed.

    Combining humans and software equitably is the key, says Reddit cofounder Alexis Ohanian. Though he no longer works for Reddit, Ohanian did come back to the company in 2014 after a hiatus to help out as executive chairman. He said that the company's use of people, artificial intelligence, and machine learning is the magic behind Reddit's unique philosophy toward content moderation.

    "We really combine human moderation with software and, in particular, automating the filtering of content through AI and ML," Ohanian told Popular Mechanics at the World Congress on Information Technology held in Yerevan, Armenia last month."We can see a much more effective system in place and the techniques and the tools just really didn’t exist 10 years ago, just were not possible."

    That tech is making content moderation much easier and efficient, Ohanian said. "So it’s an exciting thing because we think we’re on the cutting edge."

Slowe concurs. He says the bots in Reddit's ecosystem are one of the site's greatest assets, and they're almost all built by users themselves, not the company.

    "Most of the bots are actually kind of good bots, and they’re people writing simple scripts,” says Slowe.

There's a bot that tracks how many users reply to another bot's comments with "good bot" or "bad bot," keeping a tally of the most and least useful programs. There's a TLDR bot that summarizes comments. There are bots that resize GIFs for phones. It's all really creative, Slowe says, but if the bots are spammy, Reddit gets rid of 'em.
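Here's a rough sketch of the kind of simple script Slowe is describing, a toy "good bot / bad bot" tally written with the third-party PRAW library. The credentials, the choice of subreddit, and the tally logic are placeholders for illustration, not the code behind any real Reddit bot.

```python
# Toy "good bot / bad bot" tally using PRAW (Python Reddit API Wrapper).
# Credentials and subreddit are placeholders; this is a sketch, not the real bot.
from collections import Counter

import praw

reddit = praw.Reddit(
    client_id="YOUR_ID",          # placeholder credentials (read-only access)
    client_secret="YOUR_SECRET",
    user_agent="good-bot-tally-sketch by u/yourname",
)

tally: Counter[str] = Counter()

for comment in reddit.subreddit("all").stream.comments(skip_existing=True):
    verdict = comment.body.strip().lower()
    if verdict not in ("good bot", "bad bot"):
        continue
    parent = comment.parent()        # the comment being judged
    if parent.author is None:        # deleted accounts have no author
        continue
    tally[f"{parent.author.name}: {verdict}"] += 1
    print(tally.most_common(10))     # rough leaderboard of verdicts per account
```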

When it comes to machine learning, Reddit lets the software do the work when there's a gray area for an admin or moderator. Plus, there's simply not enough manpower to manually check every comment. Slowe says there are about 1 million monthly active users for each employee at Reddit.

    Still, he says the beauty of Reddit lies in letting people be people. "There’ll be things that are gray and edgy, but you deal with them.”

    “Online civilization is just another kind of civilization," Slowe continues. "You have actual police who police it and regular people who pay attention and report things. It's worked for over 10 years, and we can only hope that it works for another 10."

    Courtney Linder
    Deputy Editor
    Before joining Pop Mech, Courtney was the technology reporter at her hometown newspaper, the Pittsburgh Post-Gazette. She is a graduate of the University of Pittsburgh, where she studied English and economics. Her favorite topics include, but are not limited to: the giant squid, punk rock, and robotics. She lives in the Philly suburbs with her partner, her black cat, and towers upon towers of books.