Humans keep directing abuse — even racism — at robots

We abuse our robots. That’s a problem.

[Photo: People treat white robots better than black robots, a recent study found. Getty Images/Tetra images RF]
By Sigal Samuel, senior reporter for Vox’s Future Perfect

People can be really mean to robots. We humans have been known to behead them, punch them, and attack them with baseball bats. This abuse is happening all over the world, from Philadelphia to Osaka to Moscow.

That raises the question: Is it unethical to abuse a robot? Some researchers have been wrestling with that — and figuring out ways to make us empathize more with robots.

A study published this month in Scientific Reports found that there’s a simple way to achieve that goal. If you give someone a 3D head-mounted display (basically, a fancy set of goggles) and “beam” her into a robot’s body so she sees the world from its perspective, you can change her attitude toward it.

“By ‘beaming,’ we mean that we gave the participants the illusion that they were looking through the robot’s eyes, moving its head as if it were their head, looking in the mirror and seeing themselves as a robot,” explained co-author Francesco Pavani of the University of Trento. “The experience of walking in the shoes of a robot led the participants to adopt a friendlier attitude.”

This research adds to recent philosophical work on “robot rights” and our moral intuitions about robots. One common intuition is that if we one day manage to create a sentient robot, we’d have a duty to treat that robot ethically.

For example, the philosopher Peter Singer recently told me that the question of whether future robots should be included in our moral circle — the imaginary boundary we draw around those we consider worthy of moral consideration — is straightforward. “If AI is sentient, then it’s definitely included, in my view. If not, then it’s not.”

Our current machines are nowhere near sentient, yet the ethical quandaries around them are already growing urgent. Human suspicion and resentment of robots are real, to the point where they can turn violent. In some cases, the violence seems to stem from people’s fears: that robots will steal their jobs, say, or one day mount a violent insurrection against their human overlords.

Gender and racial biases also factor into our treatment of robots. Research shows that people prefer to hear a female voice when asking a machine for help, but they’ll sometimes lash out at “her” when she denies the request; for authoritative statements, they prefer a male voice.

And people have such deeply embedded racial biases that they even treat white robots better than black robots, according to a recent study out of the Human Interface Technology Lab in New Zealand. The study took the form of a “shooter bias test”: Participants had to assess threat level as images of black and white people flashed before them, with images of black and white robots thrown in here and there. Black robots got shot more often than their white counterparts.

“The bias against black robots is a result of bias against African Americans,” said lead author Christoph Bartneck.

For now, faced only with non-sentient robots, we probably don’t need to worry about our bad behavior having deleterious effects on them. But some researchers do worry about it having deleterious effects on us, skewing our morality. If we hurl sexual abuse at female voice assistants like Alexa or direct racism at black robots — and get zero pushback because “it’s just a robot” — that could make us more inclined to mistreat actual women and people of color.

Over-empathizing with robots comes with risks too

The risk that abusing robots will degrade our own character isn’t the only thing to worry about in our relationship with them.

What if, some researchers ask, we start to empathize too much with robots — and that makes us prefer robots to humans?

In a paper published this week by the Montreal AI Ethics Institute, Camylle Lanteigne of McGill University argues that empathizing with robots risks actually reducing our empathy for people, because “social robots remove the need to empathize with other human beings … by making it possible to have exactly what we want, without compromise.”

Lanteigne explains why she thinks it’s likely that humans will come to prefer relationships with robots:

A social robot that is always in a good mood, always does what we ask it to, and happens to suit our fancy in all the other ways suddenly seems like a plausible romantic partner, as it does not present any disagreeable behavior that would make it difficult for us to empathize with it.

[This will lead to] the inconsiderate and unempathetic treatment of human beings precisely because they are too human, too complex and unpredictable in comparison to the tailored-to-our-every-desire social robot.

If it seems hard to believe you might ever prefer a machine to a flesh-and-blood romantic partner, consider that some people are already exhibiting that tendency. In China, for example, millions of women have downloaded an app that lets them carry out relationships with virtual boyfriends. Women know these boyfriends are preprogrammed machines, yet they happily pay to hear the “men” whisper sweet nothings in their ears.

So what’s the bottom line? Should we be trying to empathize with robots more? Less? Differently? The only thing that’s really clear for now is that robots currently inhabit a moral gray zone. There’s a lot of work to be done in this space — hopefully before sentient AI appears on the horizon.
