Facebook’s dilemma: Its filtering algorithms just aren’t smart enough

Facebook CEO Mark Zuckerberg speaks at the F8 summit in San Francisco, California, on March 25, 2015.
Photograph by Josh Edelson — AFP/Getty Images

By now, we’re all pretty used to the idea that Facebook is massive — so massive that it dwarfs just about any other web service most of us have ever used. A billion users in a single day. Close to a billion photos posted every day. Three billion videos viewed every day. And driving all of that is a powerful set of algorithms that determine what we see and when. But is that a good thing or a bad thing? It’s complicated.

Realistically, using algorithms is the only way an entity like Facebook (FB) could hope to exist, or to grow as large as it has. If every post, photo, and video had to be looked at by a human being, it would take years to go through what Facebook’s users publish in a single month, and the company would have to hire millions of employees (it has about 10,000).
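That claim holds up to a quick back-of-envelope check. Here is a minimal sketch in Python using the article’s own figures plus one invented input: the average review time per item is an assumption, not a reported number.

```python
ITEMS_PER_DAY = 1_000_000_000    # close to a billion photos a day (per the article)
EMPLOYEES = 10_000               # Facebook's approximate headcount (per the article)
SECONDS_PER_ITEM = 10            # assumed average review time per item (hypothetical)
WORK_SECONDS_PER_DAY = 8 * 3600  # one eight-hour shift per employee

# Items the entire workforce could review in one day, doing nothing else.
daily_capacity = EMPLOYEES * WORK_SECONDS_PER_DAY // SECONDS_PER_ITEM

days_to_clear_one_day = ITEMS_PER_DAY / daily_capacity
years_to_clear_one_month = 30 * days_to_clear_one_day / 365

print(f"One day of uploads = {days_to_clear_one_day:.0f} days of full-company review")
print(f"One month of uploads = roughly {years_to_clear_one_month:.1f} years of review")
```

At those numbers, a single month of uploads represents roughly three years of full-time review work for the entire company, and that counts photos alone.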

Algorithms as a method for filtering content, however, can lead to a number of problems. For example, depending on how the Facebook news feed is calibrated, and which behavioral signals it weighs, a person’s feed might be full of light-hearted ice-bucket challenge videos — even as a compelling (and disturbing) news event is unfolding at the same time, like the shooting of Michael Brown in Ferguson, Mo.
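To see how that can happen, consider a deliberately simplified ranking sketch. Nothing below reflects Facebook’s actual news-feed code; the Post fields and the scoring weights are invented for illustration. If the score is driven purely by engagement signals, a feel-good viral video will outrank a hard-news post every time:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    """Score a post on engagement alone, with made-up weights."""
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

posts = [
    Post("Ice-bucket challenge video", likes=9_500, shares=4_200, comments=1_800),
    Post("Report on the Ferguson, Mo. shooting", likes=1_200, shares=800, comments=2_500),
]

# Sorting by engagement alone buries the news story, however important
# it is: these signals measure attention, not news value.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.0f}  {post.title}")
```

Any real ranking system weighs far more signals than this, but the failure mode is the same: if none of the inputs capture importance, no setting of the weights can surface it.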

Or, to take a more recent case, those algorithms (combined with the work of human editors) can result in powerful images of Syrian refugees being removed because they might disturb some users. Similarly, images of terrorist activity may be taken down even though they provide an important journalistic record of war crimes.

These filters can cause problems in the opposite direction as well: speech that is racist or hate-filled stays up. Some of the discussion about European refugees has turned ugly enough that the German government is asking Facebook for help in removing some of it.

Part of the problem is that Facebook doesn’t give people many finely tuned ways to express themselves. There’s mostly just the ubiquitous “like” button and the comment field. That is why the site has been working on what some have erroneously called a “dislike” button, which is really intended as a way of expressing more complex emotions: a response to things like a death, or a complicated social and political issue like the refugee crisis.

During a recent Q&A at Facebook headquarters, founder and CEO Mark Zuckerberg talked about the difficulty of filtering and fine-tuning the user experience when it comes to content like the photo of a three-year-old Syrian boy lying dead on a beach in Turkey, an image that recently triggered a global debate about such imagery.

“This is an area where we can certainly do better,” Zuckerberg said. “Under the current system, our community reports content that they don’t like, and then we look at it to see if it violates our policies, and if it does we take it down. But part of the problem with that is by the time we take it down, someone has already seen it and they’ve had a bad experience.”

The promise of artificial intelligence, said the Facebook founder, is that some day computers might be able to filter such content more accurately, and allow people to personalize their news feeds. “But right now, we don’t have computers that can look at a photo and understand it in the way that a person can, and tell kind of basic things about it… is this nudity, is this graphic, what is it,” he said.
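Put side by side, the two flows Zuckerberg describes look something like the sketch below. Every function, field, and threshold here is hypothetical, invented to illustrate the contrast between today’s report-and-review pipeline and the classifier-first pipeline he hopes AI will enable; Facebook’s real systems are not public:

```python
def classify_image(image_bytes: bytes) -> dict:
    """Stand-in for the trained model Zuckerberg describes; a real one
    would return per-category confidences (dummy values here)."""
    return {"nudity": 0.0, "graphic": 0.2}

def violates_policy(post: dict) -> bool:
    """Placeholder for human review against written policies."""
    return post.get("reviewer_found_violation", False)

def reactive_flow(post: dict, report_count: int) -> str:
    """Today's pipeline: content is visible first, and removal happens
    only after users report it, so someone has already seen it."""
    if report_count > 0 and violates_policy(post):
        return "removed, but only after users saw it"
    return "visible"

def proactive_flow(post: dict, threshold: float = 0.9) -> str:
    """The hoped-for pipeline: a classifier screens content before it
    is shown; the threshold could even be set per user."""
    scores = classify_image(post["image"])
    if max(scores.values()) >= threshold:
        return "held for review before anyone sees it"
    return "visible"

post = {"image": b"", "reviewer_found_violation": True}
print(reactive_flow(post, report_count=3))  # removed, but only after users saw it
print(proactive_flow(post))                 # visible (the dummy scores are low)
```

The per-user threshold in proactive_flow gestures at the personalization Zuckerberg mentions: the same model could hide a graphic image from one user while showing it to another who has opted in.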

Zuckerberg said that in the case of the Syrian child lying dead on the beach, he thought that image was very powerful, because it symbolized a huge problem and crystallized a complex social issue. “I happen to think that was a very important photo in the world, because it raised awareness for this issue,” he said. “It’s easy to describe the stats about refugees, but there’s a way that capturing such a poignant photo has of getting people’s attention.”

The social network is trying to find a way to balance the desire of users to not be offended by such content with the need to show such imagery in certain cases, Zuckerberg said. “It’s an interesting balance in terms of running this community. Where do you fall on that spectrum? Because sometimes it’s important for people to see things even if they disagree with them or are upset by them.”

Unfortunately, with billions of pieces of content uploaded every day, Facebook is still going to get those kinds of decisions wrong a lot of the time — at least until it develops the kind of super-smart AI that the Facebook founder is talking about. And even then, there will likely be times when it goes astray, if only because human emotions and behavior are so difficult to fathom. Even people aren’t that good at it.

You can follow Mathew Ingram on Twitter at @mathewi, and read all of his posts here or via his RSS feed. And please subscribe to Data Sheet, Fortune’s daily newsletter on the business of technology.
