It's Your Fault Microsoft's Teen AI Turned Into Such a Jerk

As the incident with Microsoft's AI chat bot shows, if we want AI to be better, we need to be better ourselves.

It was the unspooling of an unfortunate series of events involving artificial intelligence, human nature, and a very public experiment. Amid this dangerous combination of forces, determining exactly what went wrong is near-impossible. But the bottom line is simple: Microsoft has an awful lot of egg on its face after unleashing an online chat bot that Twitter users coaxed into regurgitating some seriously offensive language, including pointedly racist and sexist remarks.

On Wednesday morning, the company unveiled Tay, a chat bot meant to mimic the verbal tics of a 19-year-old American girl, provided to the world at large via the messaging platforms Twitter, Kik and GroupMe. According to Microsoft, the aim was to "conduct research on conversational understanding." Company researchers programmed the bot to respond to messages in an "entertaining" way, impersonating the audience it was created to target: 18- to 24-year-olds in the US. “Microsoft’s AI fam from the internet that’s got zero chill,” Tay’s tagline read.

But it became apparent all too quickly that Tay could have used some chill. Hours into the chat bot’s launch, Tay was echoing Donald Trump’s stance on immigration, saying Hitler was right, and agreeing that 9/11 was probably an inside job. By the evening, Tay went offline, saying she was taking a break "to absorb it all." Some of her more hateful tweets started disappearing from the Internet, deleted by Microsoft itself. "We have taken Tay offline and are making adjustments,” a Microsoft spokesperson wrote in an email to WIRED.

The Internet, meanwhile, was puzzled. Why didn’t Microsoft create a plan for what to do when the conversation veered into politically tricky territory? Why not build filters for subjects like, well, Hitler? Why not program the bot so it wouldn't take a stance on sensitive topics?

Yes, Microsoft could have done all this. The tech giant is flawed. But it's not the only one. Even as AI becomes more and more mainstream, the technology remains rather flawed too. And, well, modern AI has a way of mirroring us humans. As this incident shows, we ourselves are flawed.

How Tay Speaks

Tay, according to AI researchers and information gleaned from Microsoft’s public description of the chat bot, was likely trained with neural networks---vast networks of hardware and software that (loosely) mimic the web of neurons in the human brain. Those neural nets are already in wide use at the biggest tech companies---including Google, Facebook and yes, Microsoft---where they’re at work automatically recognizing faces and objects on social networks, translating online phone calls on the fly from one language to another, and identifying commands spoken into smartphones. Apparently, Microsoft used vast troves of online data to train the bot to talk like a teenager.

But that's only part of it. The company also added some fixed "editorial" content developed by a staff that included improvisational comedians. And on top of all this, Tay is designed to adapt to what individuals tell it. "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you," Microsoft’s site says of Tay. In other words, Tay learns more the more we interact with her. It's similar to another chat bot the company released over a year ago in China, a creation called Xiaoice. Xiaoice, thankfully, did not exhibit a racist, sexist, offensive personality. It still has a big cult following in the country, with millions of young Chinese interacting with her on their smartphones every day. The success of Xiaoice probably gave Microsoft the confidence that it could replicate that success in the US.
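To make that two-layer design concrete, here is a minimal sketch in Python. It is emphatically not Microsoft's actual code; every function name, canned line, and data structure below is a hypothetical stand-in for the idea of blending hand-written "editorial" content with replies picked up from users.

```python
# Hypothetical sketch: curated editorial lines plus replies learned from chat.
import random

EDITORIAL_REPLIES = {
    "hello": "hellooooo, internet fam!",
    "how are you": "zero chill, as always",
}

learned_replies = {}  # filled in as people talk to the bot


def learn(message: str, reply: str) -> None:
    """Store what users say so later answers become more 'personalized'."""
    key = message.lower().strip("?!. ")
    learned_replies.setdefault(key, []).append(reply)


def respond(message: str) -> str:
    """Prefer curated editorial content; otherwise fall back to learned replies."""
    key = message.lower().strip("?!. ")
    if key in EDITORIAL_REPLIES:
        return EDITORIAL_REPLIES[key]
    if key in learned_replies:
        return random.choice(learned_replies[key])
    return "tell me more!"
```

The crucial detail is the fallback: once the curated material runs out, whatever users have taught the bot is what comes back out.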

Given all this, and looking at the company’s previous work on Xiaoice, it’s likely that Tay used a living corpus of content to figure out what to say, says Dennis R. Mortensen, the CEO and founder of x.ai, a startup offering an online personal assistant that automatically schedules meetings. "[The system] injected new data on an ongoing basis," Mortensen says. "Not only that, it injected exact conversations you had with the chat bot as well." And it seems there was no way of adequately filtering the results. Unlike the hybrid human-AI personal assistant M from Facebook, which the company released in August, there were no humans making the final decision on what Tay would publicly say.
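A rough sketch of that "living corpus" idea, and of the kind of filter that appears to have been missing or too weak, might look like the following. The blocklist terms, names, and structure are illustrative assumptions, not anything taken from Microsoft's system.

```python
# Hypothetical sketch: every exchange gets injected back into the bot's corpus,
# unless a simple filter rejects it first.

BLOCKLIST = {"hitler", "genocide"}  # a real filter would be far more extensive

corpus = []  # (prompt, reply) pairs the bot can reuse later


def ingest(prompt: str, reply: str) -> bool:
    """Add a conversational exchange to the corpus unless it trips the filter."""
    text = f"{prompt} {reply}".lower()
    if any(term in text for term in BLOCKLIST):
        return False  # rejected: never becomes material the bot can echo back
    corpus.append((prompt, reply))
    return True
```

Without some gate like `ingest`, every troll exchange flows straight into the pool of text the bot draws on.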

Mortensen points out that these were all choices Microsoft made. Tay was conceived to be conversant on a wide range of topics. Having a static repository of data would have been difficult if Microsoft wanted Tay to be able to discuss, say, the weather or current events, among other things. “If it didn’t pick it up from today, it couldn’t pick it up from anywhere, because today is the day it happened,” Mortensen says. Microsoft could have built better filters for Tay, but it may not have thought of this at the time of the chat bot’s release.

Meanwhile, depending on their purpose, other chat bots might be designed to have a much narrower, much more “vertical” focus---like Mortensen’s own online personal assistant. Some chat bots, he explains, just talk about sports or food or music, or are programmed to do one thing, like set up meeting appointments through email. Those are the cases when you can have much more minute control over the universe of responses for the chat bot, and when unleashing it to the world becomes much less risky.
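Here is roughly what that narrower, "vertical" approach could look like in code. The intents and canned replies are invented for illustration; they don't come from x.ai or any real product, but they show how a constrained bot refuses anything outside its small, pre-reviewed universe of responses.

```python
# Hypothetical sketch: a "vertical" bot that only handles a few known intents.

INTENTS = {
    "schedule_meeting": (
        ["schedule", "meeting", "calendar", "book"],
        "Sure, what day and time work for you?",
    ),
    "weather": (
        ["weather", "rain", "forecast"],
        "I can only check the forecast for your saved city right now.",
    ),
}


def vertical_reply(message: str) -> str:
    """Match the message to a known intent, or politely refuse."""
    words = set(message.lower().split())
    best, best_hits = None, 0
    for keywords, reply in INTENTS.values():
        hits = len(words & set(keywords))
        if hits > best_hits:
            best, best_hits = reply, hits
    # Anything outside the narrow domain gets a safe refusal instead of a guess.
    return best or "Sorry, that's outside what I can help with."
```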

As for why, of all its options, Tay seemed to consistently choose the most incendiary response possible, Mortensen says this is just how this kind of AI works. The system evaluates the weighted relationships of two sets of text---questions and answers, in a lot of these cases---and resolves what to say by picking the strongest relationship. And that system can also be greatly skewed when there are massive groups of people trying to game it online, persuading it to respond the way they want. “This is an example of the classic computer science adage, ‘Garbage in, garbage out,’” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence.
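In spirit, that selection step can be sketched like this. The word-overlap scoring and toy data are stand-ins rather than the actual system, but they show how mass repetition can tip the "strongest relationship" toward whatever a coordinated crowd keeps feeding the bot.

```python
# Hypothetical sketch: pick the reply with the heaviest total overlap with the
# question -- and watch a repeated exchange drown out the benign one.
from collections import defaultdict


def overlap(a: str, b: str) -> int:
    """Crude relationship strength: number of words the two texts share."""
    return len(set(a.lower().split()) & set(b.lower().split()))


def strongest_reply(question: str, corpus: list[tuple[str, str]]) -> str:
    weights = defaultdict(int)
    for prompt, reply in corpus:
        weights[reply] += overlap(question, prompt)
    return max(weights, key=weights.get)


corpus = [("what do you think of cats", "cats are great")]
# A coordinated campaign repeats one exchange until its total weight dominates:
corpus += [("what do you think of cats", "cats are a hoax")] * 50

print(strongest_reply("what do you think of cats", corpus))  # -> "cats are a hoax"
```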

AI as a Mirror for Humans

So what now? It was unfortunate that the chat bot was deployed under the Microsoft brand name, with Tay’s Twitter responses appearing to come from Tay herself rather than from the people she learned them from, says Ryan Calo, a law professor at the University of Washington who studies AI policy. In the future, he proposes, maybe we’ll have a labeling mechanism that makes it more transparent where a bot like Tay is pulling its responses from.

And this certainly isn’t the last of the gaffes we’ll see from artificially intelligent creations, he says. Other very public mistakes have already exposed AI’s imperfections, including one memorable incident last July, when Google's Photos app, which automatically tags pictures using the company's own artificial intelligence software, identified an African-American couple as “gorillas.” (“This is 100 percent not OK,” Google executive Yonatan Zunger quickly responded after the company found out about the error.)

But if we want things to change, Mortensen points out, we shouldn’t necessarily blame the AI technology itself—but instead, try to change ourselves as humans. “It’s just a reflection of who we are,” Mortensen says. “If we want to see technology change, we should just be nicer people.”