Will 'Deepfakes' Disrupt the Midterm Election?

Advances in machine learning allow almost anyone to create plausible imitations of candidates in video and audio, potentially sowing confusion.
The real Donald Trump. Win McNamee/Getty Images

Plenty of people are following the final days of the midterm election campaigns. Yale law researcher Rebecca Crootof has a special interest—a small wager. If she wins, victory will be bittersweet, like the Manhattan cocktail that will be her prize.

In June, Crootof bet that before 2018 is out an electoral campaign somewhere in the world will be roiled by a deepfake—a video generated by machine-learning software that shows someone doing or saying something that in fact they did not do or say. Under the terms of the bet, the video must receive more than 2 million views before being debunked. If she loses, Crootof will owe a sporting tiki drink to Tim Hwang, director of a Harvard-MIT project on ethics and governance of artificial intelligence. If she wins, it will validate the fears of researchers and lawmakers that recent AI advances could be used to undermine democracy.

The US midterms are seen as a possible target that could prove the pessimists right. Facebook says the elections have already attracted other, more conventional disinformation campaigns. Crootof says a poorly timed (or well timed, depending on your perspective) deepfake could undermine the whole process. “If the target of the deepfake loses, the legitimacy of the entire election will be in question,” she says. A free Manhattan would be small consolation.

Concern about deepfakes is driven by recent striking advances in software to generate fake audio and video—and evidence that you don’t need a PhD in artificial intelligence to use them. Some freely available tools that can swap faces in video using machine learning come with user-friendly graphical interfaces, requiring no programming skills.

A year ago, “deepfakes” was an obscure username on Reddit. It took on new meaning after that account uploaded glitchy pornographic videos that appeared to feature Hollywood stars such as Scarlett Johansson and Gal Gadot. The still-unknown person or people behind deepfakes used photos of the actors’ faces sourced online to train algorithms to generate new images in which their expressions matched those in frames of the video to be modified. Software then pasted those generated faces over the target face in every frame to create the finished clip.
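
That pipeline can be sketched compactly. Below is a minimal, illustrative tf.keras version of the shared-encoder, two-decoder autoencoder design the original code reportedly used; the layer sizes, training details, and data are hypothetical stand-ins (real pipelines add face detection, alignment, and much larger models), not the actual deepfakes code.

```python
# Illustrative shared-encoder / two-decoder face-swap autoencoder.
# All shapes and hyperparameters are hypothetical stand-ins.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64  # assumed resolution of aligned face crops

def make_encoder():
    inp = layers.Input((IMG, IMG, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    z = layers.Dense(256)(layers.Flatten()(x))  # latent code shared by both identities
    return Model(inp, z, name="encoder")

def make_decoder(name):
    z = layers.Input((256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = make_encoder()
decoder_a, decoder_b = make_decoder("decoder_a"), make_decoder("decoder_b")

# Each autoencoder reconstructs its own identity through the SHARED encoder.
auto_a = Model(encoder.input, decoder_a(encoder.output))
auto_b = Model(encoder.input, decoder_b(encoder.output))
auto_a.compile("adam", "mae")
auto_b.compile("adam", "mae")

faces_a = np.random.rand(8, IMG, IMG, 3)  # stand-ins for aligned face crops
faces_b = np.random.rand(8, IMG, IMG, 3)
auto_a.fit(faces_a, faces_a, epochs=1, verbose=0)
auto_b.fit(faces_b, faces_b, epochs=1, verbose=0)

# The swap: encode a frame of identity A, decode with B's decoder, so the
# output keeps A's pose and expression but wears B's face.
swapped = decoder_b.predict(encoder.predict(faces_a, verbose=0), verbose=0)
```

The shared encoder is the trick: because both identities compress through the same latent space, a code computed from one face can be decoded into the other.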

The deepfakes account posted the code, built on Google’s open source TensorFlow AI software, and its methodology to the world in December. Reddit and the adult site Pornhub moved to ban deepfakes, though there is still no reliable way to detect the videos.

The clips and the software needed to make them are now a fixture of the internet, and “deepfake” has become a catchall term for any fake audio or video created using machine learning.

US lawmakers on both sides of the aisle are worried about the political misuse of such technology. A white paper on regulating social media drafted by Senate Intelligence Committee vice chair Mark Warner (D-Virginia) describes deepfakes as “poised to usher in an unprecedented wave of false or defamatory content.” The document suggests changing federal law to make companies such as Facebook legally liable for defamation and other consequences of deepfake videos on their sites. Last month, a bipartisan group of members of Congress asked Director of National Intelligence Dan Coats to tell them by mid-December whether US agencies have evidence of foreign adversaries using deepfakes to harm the US, and what is being done to prevent it.

So far, there’s no public evidence of deepfake clips being used to sow political disinformation, but a series of stunts has demonstrated what that might look like. Many have involved spoofing clips of President Trump.

Lyrebird, a startup developing voice-cloning technology, has promoted its wares with a fake clip of Trump saying he is considering sanctions against countries that do business with North Korea. The company uses samples of a person’s voice to train software that can generate new speech with the same intonation. Lyrebird has also made clips of Barack Obama, and says it spoofed the politicians to raise awareness of the risk of malicious use of voice-cloning technology.
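
The conditioning idea behind voice cloning can be illustrated with a toy model: a learned embedding of the target voice is fed into the synthesizer alongside the text, so the generated speech carries that speaker’s characteristics. Everything in this sketch, from layer sizes to the embedding itself, is hypothetical; Lyrebird has not published its models.

```python
# Toy speaker-conditioned text-to-spectrogram model. Entirely hypothetical;
# Lyrebird's actual architecture is not public.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

TEXT_LEN, VOCAB, N_MELS = 50, 40, 80

chars = layers.Input((TEXT_LEN,), dtype="int32")  # character/phoneme ids
speaker = layers.Input((64,))                     # embedding of the target voice
x = layers.Embedding(VOCAB, 128)(chars)
# Broadcast the speaker vector to every timestep so the whole utterance
# is conditioned on the target voice.
s = layers.RepeatVector(TEXT_LEN)(layers.Dense(128)(speaker))
x = layers.Concatenate()([x, s])
x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)
mel = layers.Dense(N_MELS)(x)  # one mel-spectrogram frame per timestep
tts = Model([chars, speaker], mel)
tts.compile("adam", "mse")

# "Cloning" a voice then amounts to swapping in a new speaker embedding,
# computed (in a real system) from recorded samples of the target.
text_ids = np.random.randint(0, VOCAB, (1, TEXT_LEN))
voice_embedding = np.random.rand(1, 64)
spectrogram = tts.predict([text_ids, voice_embedding], verbose=0)
```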

The Belgian socialist party sp.a has used deepfakes for campaign messaging. In May, the party released a clip in which Trump, his face sometimes eerily distorted, taunts Belgium for not meeting its obligations under the Paris climate agreement.

Toward the end of the video, the ersatz Trump, voiced by an actor, says the clip is fake, although Politico reported that some commenters on sp.a’s Facebook page didn’t notice and thought the message was real. Sp.a said the clip had been intended to “start a public debate” and promote a petition about climate change, not mislead anyone. The success of that high-tech stunt is debatable; nearly six months later, the petition has only 2,644 signatures.

Hwang, the Harvard-MIT researcher who took Crootof’s bet, says technical flaws like those visible in the sp.a clip demonstrate that deepfakes aren’t an immediate threat. “My theory is that it’s just not easy enough,” he says. “We won’t really see these things become a dangerous threat until it really is push-button.”

Deepfakes’ push-button moment appears to be getting closer. AI researchers and companies are improving the fidelity of fake video and audio, while open source software released by Google and others helps new techniques spread faster than ever.

The original deepfake was inspired in part by research from chipmaker Nvidia last year, in which researchers working on still images transformed house cats into cheetahs and daytime street scenes into nighttime ones. Rather than manually altering any images, they used existing photos to teach their software to generate new, fake ones. In August of this year, Berkeley researchers generated impressive video clips of themselves mirroring the movements of professional dancers. Tutorials and open source implementations have sprung up on YouTube and GitHub. In September, a team at Carnegie Mellon University published a method that can map one person’s facial expressions onto another face with impressive detail. Their demo reel transposes the mannerisms of Martin Luther King Jr. onto Obama, and Obama’s onto Trump.
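
These results rest on the same adversarial recipe. Here is a stripped-down sketch of one conditional-GAN training step, in which a generator maps a source image (a daytime scene, a pose skeleton, one face’s expressions) toward a target image while a discriminator learns to flag the fakes; the toy architectures below are stand-ins for the far larger published models.

```python
# One conditional-GAN training step, the core of these image-to-image
# systems. Toy architectures; published models are far larger and add
# cycle-consistency or temporal losses.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64

def generator():  # maps a source image to a fake target image
    inp = layers.Input((IMG, IMG, 3))
    x = layers.Conv2D(64, 4, 2, "same", activation="relu")(inp)
    out = layers.Conv2DTranspose(3, 4, 2, "same", activation="sigmoid")(x)
    return Model(inp, out)

def discriminator():  # scores how real an image looks
    inp = layers.Input((IMG, IMG, 3))
    x = layers.Conv2D(64, 4, 2, "same", activation="relu")(inp)
    out = layers.Conv2D(1, 4, 1, "same")(x)  # PatchGAN-style real/fake map
    return Model(inp, out)

G, D = generator(), discriminator()
g_opt, d_opt = tf.keras.optimizers.Adam(2e-4), tf.keras.optimizers.Adam(2e-4)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

source = tf.random.uniform((4, IMG, IMG, 3))  # e.g. pose skeletons or day scenes
target = tf.random.uniform((4, IMG, IMG, 3))  # real photos of the target domain

with tf.GradientTape(persistent=True) as tape:
    fake = G(source)
    d_real, d_fake = D(target), D(fake)
    # Discriminator: label real images 1, generated images 0.
    d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
    # Generator wins when the discriminator mistakes its fakes for real.
    g_loss = bce(tf.ones_like(d_fake), d_fake)

d_opt.apply_gradients(zip(tape.gradient(d_loss, D.trainable_variables), D.trainable_variables))
g_opt.apply_gradients(zip(tape.gradient(g_loss, G.trainable_variables), G.trainable_variables))
del tape  # persistent tapes hold resources until released
```

Because the two networks improve against each other, the generator’s output keeps getting harder to distinguish from real footage, which is exactly why detection is a moving target.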

Some researchers are working on ways to detect and thus defend against deepfakes. The Pentagon research agency Darpa started a program in May that has reported promising results with ideas such as watching for unnatural blinking in videos. Gavin Miller, head of research at Adobe, says such defenses will wind up in a possibly unwinnable arms race with deepfake creators trying to evade them. His group has demonstrated machine-learning-powered software that makes it easy to erase or modify people or objects in video, like an automated Photoshop for moving images.
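
The blinking idea is simple enough to sketch: real people blink every few seconds, while early deepfakes, trained mostly on open-eyed photos, often don’t. The toy check below uses the eye-aspect-ratio measure from the facial-landmark literature and assumes per-frame landmarks from an external detector such as dlib; it illustrates the published idea, not Darpa’s actual tools, and the threshold is a rough convention.

```python
# Toy blink-rate check using the eye-aspect-ratio (EAR) idea. Assumes
# per-frame eye landmarks from an external detector such as dlib; an
# illustration of the approach, not Darpa's actual tooling.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark coordinates around one eye.
    EAR falls toward zero as the eyelid closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blink_rate(ears, fps=30.0, closed_thresh=0.2):
    """Blinks per minute, counted as onsets of below-threshold EAR frames."""
    ears = np.asarray(ears)
    closed = ears < closed_thresh
    blinks = np.sum(closed[1:] & ~closed[:-1])  # closed-state onsets
    return blinks / (len(ears) / fps) * 60.0

left_eye = np.random.rand(6, 2)  # stand-in landmarks for a single frame
print(f"EAR this frame: {eye_aspect_ratio(left_eye):.2f}")

# Humans blink roughly 15-20 times a minute; a long clip with a rate near
# zero is a weak signal that the face may be synthesized.
ear_series = 0.3 - 0.25 * (np.random.rand(900) < 0.02)  # fake 30-second trace
print(f"{blink_rate(ear_series):.1f} blinks/min")
```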

Despite the pace of progress, Hwang is still doubtful deepfakes will be a real danger even by the 2020 presidential election. “I’m more uncertain, but I’d still consider myself a skeptic,” he says.

Unsurprisingly, Crootof thinks differently. She may not need to be right for deepfakes to influence an election campaign. Cameron Hickey, who researches online disinformation at Harvard’s Shorenstein Center, says the debate over deepfakes’ malicious potential is a danger in itself. “The biggest tangible threat of deepfakes so far is the allegation that any future hot mike or covert recording of Donald Trump or any other candidate would be a deepfake,” he says.

