‘Deep Fakes’ a Looming Crisis
Bobby Chesney and Danielle Citron point to emerging technology that will take “fake news” to a whole new level.
Recent events amply demonstrate that false claims—even preposterous ones—can be peddled with unprecedented success today thanks to a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization. The resulting harms are significant for individuals, businesses, and democracy. Belated recognition of the problem has spurred a variety of efforts to address this most recent illustration of truth decay, and at first blush there seems to be reason for optimism. Alas, the problem may soon take a significant turn for the worse thanks to deep fakes.
Get used to hearing that phrase. It refers to digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake. Think of it as a destructive variation of the Turing test: imitation designed to mislead and deceive rather than to emulate and iterate.
As with so many technological innovations, it first broke out in a real way in pornography. An individual calling himself “deepfakes” created an entire subreddit (banned earlier this month) dedicated to pornographic movies with the faces of celebrities skillfully overlaid onto the bodies of the actresses. As Chesney and Citron point out, this is awful enough:
Although the sex scenes look realistic, they are not consensual cyber porn. Conscripting individuals (more often women) into fake porn undermines their agency, reduces them to sexual objects, engenders feelings of embarrassment and shame, and inflicts reputational harm that can devastate careers (especially for everyday people). Regrettably, cyber stalkers are sure to use fake sex videos to torment victims.
But even more nefarious uses are easy to foresee:
Blackmailers might use fake videos to extract money or confidential information from individuals who have reason to believe that disproving the videos would be hard (an abuse that will include sextortion but won’t be limited to it). Reputations could be decimated, even if the videos are ultimately exposed as fakes; salacious harms will spread rapidly, technical rebuttals and corrections not so much.
And, of course, in an era where state agents are using cyber tools to influence elections and other political dynamics in adversary nations, the national security implications are mind-boggling.
Deep fakes raise the stakes for the “fake news” phenomenon in dramatic fashion (quite literally). We have already seen trolls try to create panic over fake environmental disasters, and the recent Saudi-Qatar crisis may have been fueled by a hack in which someone injected fake stories (with fake quotes by Qatar’s emir) into a Qatari news site. Now, let’s throw in realistic-looking videos and audio clips to bolster the lies. Consider these terrifying possibilities:
- Fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery.
- Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.
- Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.
- Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort.
- A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
- A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election.
- A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or even motivating a wave of violence.
- False audio might convincingly depict U.S. officials privately “admitting” a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative.
- A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.
Note that these examples all emphasize how a well-executed and well-timed deep fake might generate significant harm in a particular instance, whether the damage is to physical property and life in the wake of social unrest or panic or to the integrity of an election. The threat posed by deep fakes, however, also has a long-term, systemic dimension.
The spread of deep fakes will threaten to erode the trust necessary for democracy to function effectively, for two reasons. First, and most obviously, the marketplace of ideas will be injected with a particularly dangerous form of falsehood. Second, and more subtly, the public may become more willing to disbelieve true but uncomfortable facts. Cognitive biases already encourage resistance to such facts, but awareness of ubiquitous deep fakes may enhance that tendency, providing a ready excuse to disregard unwelcome evidence. At a minimum, as fake videos become widespread, the public may have difficulty believing what their eyes (or ears) are telling them—even when the information is quite real.
This is going to be next to impossible to combat. And it doesn’t help, to say the least, that the President himself is constantly trying to undermine the credibility of legitimate media—and even his own law enforcement and intelligence agencies—for his own ends.