DON'T SAY SKYNET

Facebook didn’t kill its language-building AI because it was too smart—it was actually too dumb

Image: People are silhouetted as they pose with mobile devices in front of a screen projected with a Facebook logo. (Reuters/Dado Ruvic)

I get it.

Humans are weak and fleshy, while machines are big and metal and good at math—it’s easy to think that they’d want to kill us.

Recent headlines and news articles have depicted research from Facebook’s artificial intelligence lab as the sci-fi beginnings of killer AI: bots learning to talk to each other in a secret language so complex that Facebook had to shut the project down. A BGR headline reads “Facebook engineers panic, pull plug on AI after bots develop their own language,” and Digital Journal claims in its article that “There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.” Dozens of similarly misleading articles have been written.

Secret languages sound nefarious—what could they be talking about? What do the computers have to hide? The research causing such concern, which Quartz covered in June, was a study to see whether Facebook could get bots to negotiate. If there were two pears, an orange, and an apple, and each bot wanted a specific piece of fruit, could they divvy them up so everyone got what they wanted?
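To get a feel for the setup, here is a rough sketch of that kind of negotiation game in Python. The item values and the particular deal are made up for illustration; this is not Facebook’s code, just the shape of the problem: each bot privately values the items differently, and a deal is a way of splitting the pool.

```python
# A toy version of the negotiation game (made-up values, not Facebook's code).
# Both bots see the same pool of items but value them differently; a "deal"
# is just a way of splitting the pool between them.
pool = {"pear": 2, "orange": 1, "apple": 1}

# Hidden preferences: each bot scores a deal by how much of what it values it gets.
values_bot_a = {"pear": 1, "orange": 5, "apple": 2}
values_bot_b = {"pear": 3, "orange": 1, "apple": 1}

def score(values, share):
    return sum(values[item] * count for item, count in share.items())

# One possible deal: bot A takes the orange and the apple, bot B takes both pears.
share_a = {"pear": 0, "orange": 1, "apple": 1}
share_b = {"pear": 2, "orange": 0, "apple": 0}

print("bot A scores", score(values_bot_a, share_a))  # 7
print("bot B scores", score(values_bot_b, share_b))  # 6
```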

The bots did exactly what they were programmed to do: haggle over fake objects. They developed a new way of communicating with each other, because computers don’t speak English—just like we use x to stand in for a number in math, the bots were using other letters to stand in for longer words and ideas, like “iii” for “want” or “orange.”
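Here is a toy sketch of how such a shorthand can work—written in Python rather than the bots’ actual protocol—where repeating an item’s name stands in for how many of it a bot wants:

```python
# Toy shorthand (illustrative only, not Facebook's code): a bot "says" an item's
# name once per unit it wants, so repetition doubles as a count.
def encode_request(wants):
    # {"pear": 2, "orange": 1} -> "pear pear orange"
    return " ".join(item for item, count in wants.items() for _ in range(count))

def decode_request(message):
    # "pear pear orange" -> {"pear": 2, "orange": 1}
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

print(encode_request({"pear": 2, "orange": 1}))  # pear pear orange
print(decode_request("pear pear orange"))        # {'pear': 2, 'orange': 1}
```

Terse and repetitive, but perfectly legible once you know the trick: exactly the kind of code two reward-driven bots can drift into when nothing forces them to stick to English.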

But that’s not nefarious. Machines and humans simply don’t think the same way. To an AI, the idea of an apple is just a long string of numbers quantifying the color red and the word “apple” and its relationship to other fruits. These machine representations aren’t intuitive or even intelligible to humans, but we can still measure what a machine “thinks” by making it do a task. If an image-recognition AI can recognize pictures of an apple in different lighting situations, it understands “apple,” no matter what numbers it uses to do so inside. (It’s actually a problem that we don’t understand machines, not because they’re trying to kill us, but because it’s difficult to tell how biased they are.)
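To make that concrete, here is a small illustration with made-up numbers for two hypothetical models. Their internal vectors for “apple” share nothing, yet both pass the same behavioral test of knowing that an apple is more like a pear than a car:

```python
import math

# Made-up vectors for two hypothetical models -- not real embeddings.
model_a = {"apple": [0.9, 0.1, 0.0], "pear": [0.8, 0.2, 0.1], "car": [0.0, 0.1, 0.9]}
model_b = {"apple": [-0.2, 0.7, 0.6], "pear": [-0.1, 0.8, 0.5], "car": [0.9, -0.3, 0.1]}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

for name, vectors in [("model A", model_a), ("model B", model_b)]:
    apple_vs_pear = cosine(vectors["apple"], vectors["pear"])
    apple_vs_car = cosine(vectors["apple"], vectors["car"])
    # Both models pass the test, even though their "apple" numbers look nothing alike.
    print(name, "says an apple is more like a pear than a car:",
          apple_vs_pear > apple_vs_car)
```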

Facebook’s goal was to make bots that communicated in English—but since the bots ended up building their own little code, Facebook stopped working on that prototype and built another, smarter bot that did negotiate in English.

“While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades,” Facebook researcher Dhruv Batra wrote in a blog post last night, calling recent coverage irresponsible. “Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximize reward.”

That’s to say: Machines don’t think or talk like humans, so calm down. Perhaps the more concerning piece of news is not that Facebook’s bots invented a machine-readable language, but that its successful English-speaking bot eventually learned to lie.

Read our original coverage here.