Deepfakes and Election Disinformation

A long-predicted threat has emerged.

AP (“Election disinformation takes a big leap with AI being used to deceive worldwide”):

Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake – but convincing – content aimed at fooling voters.

It marks a quantum leap from a few years ago, when creating phony photos, videos or audio clips required teams of people with time, technical skill and money. Now, using free and low-cost generative artificial intelligence services from companies like Google and OpenAI, anyone can create high-quality “deepfakes” with just a simple text prompt.

A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year.

“You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” said Henry Ajder, a leading expert in generative AI based in Cambridge, England.

The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Ajder, who runs a consulting firm called Latent Space Advisory.

As the U.S. presidential race heats up, FBI Director Christopher Wray recently warned about the growing threat, saying generative AI makes it easy for “foreign adversaries to engage in malign influence.”

I first wrote about the deepfake (then “deep fake”) phenomenon back in February 2018, calling it “a looming crisis” that “is going to be next to impossible to combat.” Not surprisingly, the technology has evolved considerably in the six years since.

With AI deepfakes, a candidate’s image can be smeared, or softened. Voters can be steered toward or away from candidates — or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that a surge of AI deepfakes could erode the public’s trust in what they see and hear.

Some recent examples of AI deepfakes include:

— A video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia.

— Audio clips of Slovakia’s liberal party leader discussing vote rigging and raising the price of beer.

— A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini.

The novelty and sophistication of the technology makes it hard to track who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the deluge, nor are they moving fast enough to solve the problem.

As the technology improves, “definitive answers about a lot of the fake content are going to be hard to come by,” Ajder said.

My suspicion is that this will mostly serve to reinforce existing beliefs and predilections and, regardless, have its greatest impact on the most credulous. But the larger point is that this will make people even more skeptical that there is such a thing as “truth” or a credible information source.

Some AI deepfakes aim to sow doubt about candidates’ allegiances.

In Moldova, an Eastern European country bordering Ukraine, pro-Western President Maia Sandu has been a frequent target. One AI deepfake that circulated shortly before local elections depicted her endorsing a Russian-friendly party and announcing plans to resign.

Officials in Moldova believe the Russian government is behind the activity. With presidential elections this year, the deepfakes aim “to erode trust in our electoral process, candidates and institutions — but also to erode trust between people,” said Olga Rosca, an adviser to Sandu. The Russian government declined to comment for this story.

China has also been accused of weaponizing generative AI for political purposes.

In Taiwan, a self-ruled island that China claims as its own, an AI deepfake gained attention earlier this year by stirring concerns about U.S. interference in local politics.

The fake clip circulating on TikTok showed U.S. Rep. Rob Wittman, vice chairman of the U.S. House Armed Services Committee, promising stronger U.S. military support for Taiwan if the incumbent party’s candidates were elected in January.

Wittman blamed the Chinese Communist Party for trying to meddle in Taiwanese politics, saying it uses TikTok — a Chinese-owned company — to spread “propaganda.”

A spokesperson for the Chinese foreign ministry, Wang Wenbin, said his government doesn’t comment on fake videos and that it opposes interference in other countries’ internal affairs. The Taiwan election, he stressed, “is a local affair of China.”

It’s hardly surprising that Russia and China are leading the way here. Disinformation has long been at the heart of their governing culture. But we’ll almost certainly see this infect the 2024 U.S. elections. Indeed, it’s already started to a very modest degree:

Audio-only deepfakes are especially hard to verify because, unlike photos and videos, they lack telltale signs of manipulated content.

In Slovakia, another country overshadowed by Russian influence, audio clips resembling the voice of the liberal party chief were shared widely on social media just days before parliamentary elections. The clips purportedly captured him talking about hiking beer prices and rigging the vote.

It’s understandable that voters might fall for the deception, Ajder said, because humans are “much more used to judging with our eyes than with our ears.”

In the U.S., robocalls impersonating U.S. President Joe Biden urged voters in New Hampshire to abstain from voting in January’s primary election. The calls were later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes.

The report goes on to note that the technique will be especially powerful in countries with relatively uneducated populations, noting particular concern about upcoming elections in India and Indonesia.

Not surprisingly, the EU is being more proactive on this than we are:

The European Union already requires social media platforms to cut the risk of spreading disinformation or “election manipulation.” It will mandate special labeling of AI deepfakes starting next year, too late for the EU’s parliamentary elections in June. Still, the rest of the world is a lot further behind.

The world’s biggest tech companies recently — and voluntarily — signed a pact to prevent AI tools from disrupting elections. For example, the company that owns Instagram and Facebook has said it will start labeling deepfakes that appear on its platforms.

But deepfakes are harder to rein in on apps like the Telegram chat service, which did not sign the voluntary pact and uses encrypted chats that can be difficult to monitor.

Naturally, there’s some backlash:

Some experts worry that efforts to rein in AI deepfakes could have unintended consequences.

Well-meaning governments or companies might trample on the sometimes “very thin” line between political commentary and an “illegitimate attempt to smear a candidate,” said Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington.

I’m skeptical on this front. Parody is a legitimate form of political speech, but using deepfake technology for that purpose would seem to cross a line. We shall see.

Major generative AI services have rules to limit political disinformation. But experts say it remains too easy to outwit the platforms’ restrictions or use alternative services that don’t have the same safeguards.

Even without bad intentions, the rising use of AI is problematic. Many popular AI-powered chatbots are still spitting out false and misleading information that threatens to disenfranchise voters.

And software isn’t the only threat. Candidates could try to deceive voters by claiming that real events portraying them in an unfavorable light were manufactured by AI.

“A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that’s really challenging for a flourishing democracy,” said Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia.

More threats to our democracy we don’t need.

FILED UNDER: Democracy, Science & Technology
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. DrDaveT says:

    Tangent: what’s your source for the image at the head of the article? I’ve been looking for an image exactly like that to use in some training I’m developing…

  2. Sleeping Dog says:

    …will be especially powerful in countries with relatively uneducated populations…

    Don’t leave out the US!

    I agree it will for the most part reinforce existing biases, and if that is where it ends, we’ll be all right. But there are large subsections of the press on the left and right that will use whatever means to further the narrative they prefer, and the mainstream press is so wrapped up in its fairness Puritanism that they won’t be able to call out fiction when they see it.

  3. DrDaveT says:

    But the larger point is that this will make people even more skeptical that there is such a thing as “truth” or a credible information source.

    And they will be right to be skeptical. What’s the point of bodycams on police if you can’t trust the video to show what actually happened? Given that almost no one hears a candidate speak in person, how can you know what they really said if the video can’t be trusted? And just because you think the Podunk Daily Clarion is a reliable source, how can you be sure that the material that seems to be from the Daily Clarion really is?

    I think people are underestimating the dangers, if anything. This could destroy society if left unchecked. It’s an untraceable weapon of mass destruction.

  4. James Joyner says:

    @DrDaveT: Looks like Shutterstock.

  5. MarkedMan says:

    Back in the early days of the internet, before the Web, there were technologists claiming that it would be the end of news media because people could go directly to the source and bypass “the gatekeepers.” At the time I thought it ridiculous, and a total misunderstanding of how people work. Curation, editing, and moderating are even more important when information is available from all kinds of unvetted sources. Deepfakes just make this more true.

  6. DrDaveT says:

    @James Joyner: Thanks!

  7. Joe says:

    What if this whole concern about deepfakes isn’t actually true – it’s just a . . . deepfake?

  8. Kathy says:

    Aren’t deepfakes pretty much either fraud or libel if they are not clearly and distinctly identified as fakes?

  9. CSK says:

    In terms of using deepfakes for “illegitimate attempt[s] to smear a candidate,” Trump’s been claiming this has been done to him since 2016. Remember the pussy tape? A few days after apologizing for it, he was speculating that it was a fake.

  10. Kazzy says:

    @Kathy: IANAL, but I imagine that would require the creator to actually say something about the image.

    I could make a crude drawing of Donald Trump strangling a bald eagle and would have zero legal culpability. I could make a photorealistic painting of that image and nothing would change. So why would creating a deepfake showing that violate any law? Now, if I held that image up and said, “Donald Trump strangled a bald eagle and here is the proof!” well, now I’ve committed libel/slander. But if I send that image around the internet and people think it’s real and say, “Donald Trump strangled a bald eagle and here is the proof!” well, they aren’t knowingly saying something untrue and all I did was make some art.

  11. Just nutha ignint cracker says:

    @Kazzy: Even then, IIRC, if the plaintiff is a famous person, he would need to establish malice on your part, with all the concomitant problems of proving, for example, a conspiracy. Easy accusation to make, harder to prove.

    For example, if you testified in court that you did it so that people wouldn’t vote for him, your action might be judged to be merely political but not malicious.

  12. Kathy says:

    @Kazzy:

    A deepfake video shows a person saying and doing things they never said or did. That’s the implicit claim such a video makes, and for a rather obvious purpose.

    Of course, laws could be passed to regulate such things. But, well, you know.

  13. Just nutha ignint cracker says:

    @Kathy: And the laws will only inhibit the people who are inclined to obey laws. Sure, you’ll be able to punish the scofflaws (well, maybe anyway), but only after the damage is done (and if you identify them).

