Boston Dynamics’ robot dog, Spot. Photograph: Patrícia de Melo Moreira/AFP via Getty Images

Yes, robot dogs are scary. But they're also a cunning marketing ploy


There’s something unsettling about a private firm making powerful autonomous machines – but what’s scarier is who’s building them, and why

Earlier this year, videos of a robot being kicked, hit with a chair, and shot at by its human owners spread online. Created by an LA-based production company, Corridor Digital, the videos were a parody of those released by Boston Dynamics, a company that has been making robots since 1992.

You’ve almost certainly seen their videos. A robotic cheetah sprints across a parking lot. A robotic dog takes on a human in a tug of war. Sometimes the robots are cute, like the Sand Flea, which flicks itself effortlessly over 30ft walls. Sometimes they’re scary, like the android that does parkour.

But the general tone of the videos is ominous – like the one in which a militaristic-looking robot called BigDog is kicked and then recovers. Watching the machine regain its composure is chilling. You almost expect it to turn around and retaliate, hinting at a future when this might, in fact, happen.

For this reason, on the release of almost every new Boston Dynamics video, the internet lights up with commentary about how we’re all doomed, how the robot apocalypse is nigh. The parody videos made by the LA production company leveraged this reaction and indulged the fantasy: the series ends with the CGI robot replicas holding humans at gunpoint.

That we respond with a sense of fear to sophisticated-seeming robots makes sense. Robots have been associated with a narrative of insurrection and replacement since they made their debut in Karel Čapek’s 1920 play, Rossum’s Universal Robots, which was about worker robots violently overthrowing their human overlords. This narrative has been repeated ever since in movies like 2001: A Space Odyssey and Terminator, and more recently in the HBO series Westworld.

The purpose of these narratives has been to use the robot as humanity’s mirroring other – similar enough to force us to reflect, with some productive distance, on issues of exploitation, labor, slavery, bigotry and revolution. In other words, the robot apocalypse narrative speaks to our deepest fears about ourselves.

But with their videos of robot cheetahs and back-flipping androids, Boston Dynamics has exploited this narrative as a marketing tool. This is a cunning trick. By appropriating the sci-fi narrative in its videos, Boston Dynamics is, for one, overselling its robots. (They are nowhere near strong, smart, or robust enough to challenge human supremacy, whatever that means.)

This has created hype and a brand identity around the company. In September this year, it started selling the SpotMini, a camera-equipped, dog-like, door-opening robot, and already there has been a “deluge” of interest.

The deeper problem, though, is that conflating real robots with make-believe robots distracts from and obfuscates questions of agency and responsibility.

There is something deeply unsettling about a private company making agile, powerful, autonomous machines. But what’s scary isn’t the robot. Rather, it is who is building them and to what end.

The fact that Boston Dynamics has been repeatedly funded by Darpa, an agency of the United States Department of Defense? That’s something to think about. The fact that Boston Dynamics are advertising SpotMini for surveillance tasks? That’s something to think about. The robot apocalypse, not so much.

Grafting the robot apocalypse narrative on to real technology pervades the discourse of hi-tech. Elon Musk talks endlessly of the singularity; even the more measured Sundar Pichai anthropomorphizes AI. Ultimately, this distracts us from thinking about digital infrastructures built out of unaccountable practices. It makes it harder for us to think through complex cases, like when Uber’s self-driving car hit a pedestrian. As recent documents show, this had nothing to do with the robot car and everything to do with shoddy engineering and a bad safety culture.

Boston Dynamics’ videos are, to be sure, entertaining. But by raising the alarm of robot takeover they achieve the self-serving purpose of reasserting unrealistic fantasies about technology’s power, while redirecting attention away from more critical examination of human decisions and design practices.
