Technology is not Neutral

Facebook’s advertising algorithms are back in the news after ProPublica discovered you could target antisemitic categories in your ad campaigns. The publication had previously revealed that Facebook’s housing ads allowed landlords to exclude potential tenants based on their race.

So let’s break this down from a moral and technical perspective. What Facebook has done is create a piece of “neutral” technology. The ad-buying algorithm is a way for ad buyers to target audiences with relevant ads, based on information Facebook has gathered or inferred from its users. That’s straightforward, and it certainly sounds harmless. But Facebook failed to account for how its software would be used. For how its software could be used.

See, when you create technology based on real people, your technology is necessarily imbued with the biases and prejudices of those people. When Facebook runs an algorithm to determine whose attention is worth how much, it’s factoring in a lot of signals from people. Those signals carry bias, so the algorithm carries bias. The same premise applies to machine learning: a model trained on biased signals learns and reproduces that bias.
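Here’s a toy sketch of that mechanism. The pricing rule, the group names, and the numbers are all made up (this is not Facebook’s actual system), but it shows how a “neutral” rule that values each user’s attention by averaging historical engagement signals simply passes along whatever bias those signals already carry:

```python
# Toy illustration, not Facebook's actual system: a "neutral" pricing rule
# that values each user's attention as the average of historical engagement
# signals. If the historical signals are biased against a group, the output
# inherits that bias even though the rule never mentions the group.

from statistics import mean

# Hypothetical engagement signals in [0, 1]. Group B's signals are
# systematically deflated (say, advertisers historically under-served them),
# not because their attention is actually worth less.
signals = {
    "group_a": [0.82, 0.75, 0.90, 0.78],
    "group_b": [0.41, 0.38, 0.45, 0.40],  # biased inputs
}

def attention_value(history):
    """The 'neutral' rule: value = average of past engagement signals."""
    return mean(history)

for group, history in signals.items():
    print(group, round(attention_value(history), 2))

# Prints roughly 0.81 for group_a and 0.41 for group_b: the bias in the
# inputs flows straight through to the prices the algorithm outputs.
```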

This is why, when Microsoft released a “neutral” chatbot onto Twitter, it got really racist really quickly. Microsoft failed to account for what other bot-makers already knew: without foresight and prevention, technology can and will be used in ways that violate its creators’ own values.

Put another way: unless a technologist thinks ahead and puts in safeguards, they cannot control how the technology they create will be used.
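To make that concrete, here is a deliberately simple sketch of one such safeguard. Everything in it is hypothetical (the function, the denylist, and the review step are mine, not Facebook’s): inferred targeting categories are held back from advertisers until they clear both an automated denylist check and an explicit human review.

```python
# Hypothetical safeguard sketch, not Facebook's actual pipeline: an inferred
# targeting category is exposed to ad buyers only if it clears a denylist
# check AND has been explicitly approved by a human reviewer.

BLOCKED_SUBSTRINGS = {"hate", "hater"}  # illustrative denylist, nowhere near complete

def is_targetable(category: str, human_approved: set) -> bool:
    """Return True only if the category passes the denylist and human review."""
    lowered = category.lower()
    if any(bad in lowered for bad in BLOCKED_SUBSTRINGS):
        return False
    return lowered in human_approved

approved = {"gardening", "jazz", "home improvement"}

for category in ["Gardening", "People who hate <group>", "Jazz", "Untagged niche"]:
    status = "targetable" if is_targetable(category, approved) else "held back"
    print(f"{category}: {status}")
```

The point isn’t this particular filter; it’s that the default has to be “held back until reviewed” rather than “targetable until someone notices.”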

There are moral implications to building technology, and this is one of them: if your goal is neutrality, you must tilt the scale in favour of the people our society marginalizes. You must overshoot towards social justice. Otherwise, your “neutral” technology only reinforces the existing imbalances in our society.

Facebook is asking us to blame its algorithm for being racist, when in fact Facebook is the one that messed up. The company failed to anticipate these problems because it didn’t think carefully enough about how its software could be used. This should serve as a valuable lesson to all of us.

