2024 Elections

‘An arms race forever’ as AI outpaces election law

The start of the primaries has already shown the challenge of policing AI in elections.


Dean Phillips, Joe Biden’s long-shot Democratic primary challenger, suffered a crushing defeat in New Hampshire last month, losing to Biden by an enormous margin.

An AI version of Phillips was shut down just as easily.

In the run-up to the New Hampshire vote, a super PAC supporting Phillips used AI to create a conversational bot to answer voters’ questions. It was a clunky effort that breached OpenAI’s rules, and the company quickly quashed it.

Another AI scheme in that primary went unsolved for weeks: Deepfaked robocalls of Biden told voters not to show up at the polls in an apparent voter suppression campaign. The Federal Communications Commission issued a cease-and-desist letter Tuesday to a Texas company that allegedly carried the calls on its phone network.

The two contrasting AI operations threw into relief where the most serious disruptions to political campaigns are likely to come from: not from above-board groups that disclose their electioneering activities, but from AI outlaws bent on sowing chaos and disinformation.

“It’s going to be very difficult to regulate,” said Rachel Orey, senior associate director of the Bipartisan Policy Center Elections Project. “Everyone talks about OpenAI and ChatGPT, but that’s unlikely where the most pernicious use cases are going to come from. They’re going to come from unregulated open-source technology.”

AI technology has galloped far ahead of laws, regulations and industry norms.

Congress has not passed a single law regulating AI usage in elections. Federal Election Commission Chair Sean Cooksey has said the commission will get to AI rulemaking by early summer. Even then, it is not clear the FEC has jurisdiction over AI in campaigning, and what regulation would be allowed under the First Amendment.

Some states have passed laws regulating AI-generated content in campaign materials. But lawmakers in Washington acknowledge they are falling behind, including Sen. Amy Klobuchar (D-Minn.), who sponsored a bipartisan bill to ban the use of AI-generated deepfakes in elections.

“The problem is not going to go away on its own,” she said Tuesday in an address to a panel sponsored by Microsoft. “We can’t sit on the sidelines while AI continues to advance without any rules of the road.”

So far, the most effective check on AI adoption in campaign operations may be nerves. The American Association of Political Consultants’ board of directors unanimously condemned the use of deepfakes in political advertising last year, and the tech is showing up largely as a novelty in campaigns.

In California’s 16th Congressional District, Democrat Peter Dixon used AI to depict his life story in a campaign launch video.

Dixon said in an interview he hopes his “lighthearted” video will help establish “ethical norms” of AI in campaigns. “Novel technologies tend to be used by scrappy outsiders who are trying to disrupt the status quo,” he said.

A super PAC supporting Ron DeSantis’ failed presidential bid used AI to clone Donald Trump’s voice reading a social media post. In December, a Pennsylvania Democrat deployed an AI “volunteer” to call voters for her congressional campaign.

People are “very skittish” about using AI for creative production in part because the ethics are murky, said Republican digital strategist Eric Wilson. “The last thing a campaign wants to do is to put together an ad and it gets attention for all the wrong reasons.”

AI in ads will likely expand as the campaign cycle accelerates. Maya Hutchinson, a Democratic strategist working on a startup that uses AI to create ads, said the technology can help tailor messages to target different groups “in a very cautious and thoughtful way.”

But for now, 2024 is the “AI experimentation cycle,” said Emily Karrs, creative director at Republican firm IMGE. She said strategists are aware that voters are leery of AI-generated deception.

“A lot of the cool generative AI stuff feeds right into one of the top criticisms of politicians — that they’re fake,” she said.

Recent elections overseas offer ample evidence of the perils of AI.

In Slovakia, fraudsters exploited a hole in Meta’s platform policies and posted an AI-generated audio deepfake of two Slovakian politicians discussing election rigging. The content went live during a 48-hour moratorium before the polls opened in parliamentary elections — complicating efforts to debunk it.

In Argentina, Sergio Massa’s presidential campaign reportedly used an AI model to generate election posters. Ben Brooks, global policy director for the AI developer Stability AI, told POLITICO the company only found out when it read the news.

Once created, AI-generated content roams free on platforms that have few restrictions. Meta announced Tuesday that it will start labeling images on Facebook, Instagram and Threads that it detects were generated by AI.

Microsoft rolled out a set of promises in November, including content watermarks and offering help to political campaigns. Google said it will deploy AI to detect and remove AI-generated misinformation on its platforms, which include YouTube. It also touted its own watermark for AI-generated images and audio.

But the tech industry has sputtered on enforcing policies before, even on issues of broad consensus like kids’ online safety.

On Tuesday, a top Microsoft official tried to keep expectations realistic.

“The reality is the technology to create deepfakes is going to keep advancing, just as we’re advancing on the technology to catch it,” said Ginny Badanes, senior director of Microsoft’s Democracy Forward initiative. “It’s going to be an arms race forever.”