Stopping Deepfake Porn

A growing scourge may be next to impossible to stop.

Yahoo News senior editor Mike Bebernes (“Taylor Swift isn’t the only victim of AI porn. Can the spread of deepfake nudes be stopped?”):

Fake nude pictures of celebrities are not a new phenomenon, but thanks to advanced and widely available artificial intelligence tools, it is now possible to quickly produce high-quality images or videos featuring anyone’s likeness in any scenario imaginable. While a lot of attention has been paid to how deepfakes could be used to spread misinformation, research shows that 98% of all AI-generated videos online are pornographic and nearly all of the individuals targeted are women.

Celebrities like actresses, musicians and social media influencers are most frequently featured in deepfake porn, but there are many examples of average women and girls also being targeted. Last year, administrators at a New Jersey high school discovered that some students had used AI to create fake nude images of more than 30 of their classmates. Similar incidents have been reported at other schools in the U.S. and abroad.

It’s illegal to share real nude images of someone without their consent in almost every state, especially if they’re a minor. But the laws around artificial porn are much weaker — even though the harm caused to victims can be the same whether the content is fake or genuine. There is no federal law concerning deepfake porn and only about 10 states have statutes banning it. Most social media sites prohibit AI porn, but the scale of the problem and lax moderation mean it can still be rampant on their platforms. One post featuring Swift deepfakes was live on X, formerly Twitter, for 17 hours and gathered more than 45 million views before it was taken down.

Like so many other harmful things online, it may be impossible to completely eradicate AI porn. But experts say there are plenty of things that can be done to make it dramatically less prevalent and limit the damage it causes.

Several bills have been proposed in Congress that would create nationwide protections against deepfake porn, either by creating new legal penalties for those who create or share it or by giving victims new rights to seek damages after they’ve been targeted. Supporters of these plans say that, even if the new laws didn’t sweep up every bad actor, they would lead to some high-profile cases that would scare others away from creating deepfakes.

Outside of new laws, many tech industry observers argue that the public needs to put pressure on the various mainstream entities that allow people to create, find, spread and profit from AI porn — including social media platforms, credit card companies, AI developers and search engines. There’s also hope that fear of lawsuits from someone like Swift could create enough financial risk that these groups will begin taking deepfakes more seriously.

At the same time, some experts make the case that the war against AI porn has effectively already been lost. In their view, the technical problem of finding and blocking so many deepfakes is basically unsolvable and even the most aggressive new laws or policies will only capture a tiny fraction of the flood of fake explicit content that’s out there.

This is a classic case of technology advancing faster than lawmakers and regulators can understand it. While I’ve written about the phenomenon a couple of times, going back to a post almost exactly six years ago (“‘Deep Fakes’ a Looming Crisis”), I certainly don’t claim any expertise. But even then, I understood: “This is going to be next to impossible to combat.”

Writing at Rolling Stone a couple weeks back, right after the Swift deepfakes started circulating, Miles Klee elaborated (“Swifties Want a Massive Crackdown on AI-Generated Nudes. They Won’t Get One”):

Swift’s superstardom, signs of Congressional support, and a highly motivated stan army would seem to promise powerful momentum for any attempt to eradicate these nonconsensual AI nudes. But that crusade will come up against a thorny and forbidding set of complications, according to civil liberty experts — no matter how fired up the Swifties are.

“They’re a huge force, and they advocated,” says Katharine Trendacosta, director of policy and advocacy at the Electronic Frontier Foundation, a nonprofit focused on internet users’ privacy and free expression. “But they did that after Ticketmaster, and we somehow still have Ticketmaster,” she adds, referring to Swifties savaging the company as a price-gouging monopoly (and in some cases even filing lawsuits) due to its mishandling of ticket sales for Swift’s Eras Tour. In the AI fight, too, Trendacosta says, we’ll see “the unstoppable movement of the Swifties versus the immovable object that is the legislature,” a Congress slow to respond to “basically anything.”

But it’s not just that Congress is broken (largely because Republicans refuse to vote for even things they support if it might help Democrats); it’s also that crafting legislation here is fraught with all manner of technical and ideological challenges.

Reform and government oversight, however, is difficult, Trendacosta says, not least because legislators’ ideas of how to combat deceptive AI have been all backwards. The EFF, for instance, opposes the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act, introduced by Reps. María Elvira Salazar of Florida and Madeleine Dean of Pennsylvania earlier this month. Why? Because in seeking to guarantee “individual property rights in likeness and voice,” the proposed law would broaden publicity rights — that is, your right to not have a company falsely claim you endorse their product — to any kind of digital representation, “from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more,” as EFF notes in a statement on the bill. Other critics have also warned of the chilling effect this would have on digital free speech. Under its expansive language, sharing a Saturday Night Live clip that features an impression of Swift would potentially be a criminal offense.

That strikes me as more than a bit of a stretch. But, to be fair, parody is protected speech and, broadly speaking, so is most consensual pornography.

“I know several legislators are attempting to either write new bills, or adjust existing laws around revenge porn to prosecute it, but much of this is incredibly new,” says Mike Stabile of the Free Speech Coalition, the trade association of the U.S. adult entertainment industry. “As detestable as [nonconsensual AI porn] might be, it’s still technically speech, and efforts to curtail it or ban it may face hurdles in the courts.”

“In the short term, platforms are the best tool in blocking widespread distribution,” Stabile says, adding that adult sites including Pornhub and Clips4sale “have been ahead of the pack on this, and banned deepfakes and revenge porn years ago.” Of course, these rules depend on enforcement — and that, according to Trendacosta, can be an insurmountable task in itself.

“The problem we often see with the largest companies, like on Facebook or on Google or even Twitter, which isn’t even that big, is that the enforcement is really selective because they have so much content,” she says. “It’s actually just impossible.” Incidents like the sudden proliferation of AI-spawned illustrations of Swift in sexual scenes will draw the most focus and garner a relatively quick response, whereas “the already victimized or marginalized” receive little help, if any, Trendacosta says. The outcry over Swift’s admittedly terrible situation has far outstripped, for example, concern for kids whose pictures are fed into AI models to create child sex abuse material.

Plus, Trendacosta points out, there are practical limits to the engineering side of the equation. People want to believe that “if the problem is the technology then the technician should be able to fix it by building a new technology,” she says, but this doesn’t get to the systemic roots of the problem. The Microsoft software used to create pornographic images of Swift has guardrails meant to prevent exactly this kind of misuse; bad actors found ways around them. Neither can we completely rely on filtering tech to catch platform violations. “Machines don’t understand context,” Trendacosta says. “If I draw a politician semi-nude to make fun of him, that’s protected political speech. Machines don’t know that.”

Now, frankly, if we could solve the potentially massive problem of deepfakes of celebrities and ex-wives and girlfriends proliferating, I would be willing to sacrifice semi-nude politician parodies. Especially given that most of our politicians nowadays are in their 80s. But, yes, setting parameters is likely incredibly challenging.

So while it’s easy to establish a general consensus that it’s wrong to disseminate AI porn that victimizes a pop star, the question of how we could prevent it while guaranteeing the same protections for average citizens — and preserving First Amendment rights — is very much unsettled. On the one hand, our technologies and the human teams behind them aren’t up to the task. On the other, government overcorrection might leave us with heavily restricted social networks that close off legitimate forms of commentary.

The lines here are sufficiently blurry that I don’t have a strong sense of what’s at stake on the speech side. Again, if it’s simply a matter of more sophisticated versions of images of Donald Trump making out with Vladimir Putin, it’s a sacrifice I’m willing to make. The balance on the other end is rather stark, having moved well beyond the theoretical.

As Wired’s Matt Burgess noted (“Deepfake Porn Is Out of Control”) last October:

A new analysis of nonconsensual deepfake porn videos, conducted by an independent researcher and shared with WIRED, shows how pervasive the videos have become. At least 244,625 videos have been uploaded to the top 35 websites set up either exclusively or partially to host deepfake porn videos in the past seven years, according to the researcher, who requested anonymity to avoid being targeted online.

Over the first nine months of this year, 113,000 videos were uploaded to the websites—a 54 percent increase on the 73,000 videos uploaded in all of 2022. By the end of this year, the analysis forecasts, more videos will have been produced in 2023 than the total number of every other year combined.

These startling figures are just a snapshot of how colossal the issues with nonconsensual deepfakes has become—the full scale of the problem is much larger and encompasses other types of manipulated imagery. A whole industry of deepfake abuse, which predominantly targets women and is produced without people’s consent or knowledge, has emerged in recent years. Face-swapping apps that work on still images and apps where clothes can be “stripped off a person” in a photo with just a few clicks are also highly prominent. There are likely millions of images being created with these apps.

“This is something that targets everyday people, everyday high school students, everyday adults—it’s become a daily occurrence,” says Sophie Maddocks, who conducts research on digital rights and cyber-sexual violence at the University of Pennsylvania. “It would make a lot of difference if we were able to make these technologies harder to access. It shouldn’t take two seconds to potentially incite a sex crime.”

[…]

The research also identified an additional 300 general pornography websites that incorporate nonconsensual deepfake pornography in some way. The researcher says “leak” websites and websites that exist to repost people’s social media pictures are also incorporating deepfake images. One website dealing in photographs claims it has “undressed” people in 350,000 photos.

Measuring the full scale of deepfake videos and images online is incredibly difficult. Tracking where the content is shared on social media is challenging, while abusive content is also shared in private messaging groups or closed channels, often by people known to the victims. In September, more than 20 girls aged 11 to 17 came forward in the Spanish town of Almendralejo after AI tools were used to generate naked photos of them without their knowledge.

Further, as his colleague Megan Farokhmanesh noted last March (“The Debate on Deepfake Porn Misses the Point”), the focus on “fake” may minimize the harm:

It’s not enough that some viewers can tell the media is fake. The consequences are real. Victims are harassed with explicit video and images made in their semblance, an experience some liken to assault. Repeated harassment with these videos or images can be traumatizing. Friends and family don’t always have the online literacy to understand that the media has been falsified. Streamers watch as their personal and professional brands are polluted through a proliferation of explicit content created without their knowledge or consent.

Arguing that deepfakes can’t be harmful because they’re not “real” is as reductive as it is false. It’s ignorant to proclaim they’re no big deal while the people impacted are telling you they are. Deepfakes can inflict “the same kinds of harms as an actual piece of media recorded from a person would,” says Cailin O’Connor, author of The Misinformation Age and a professor at the University of California, Irvine. “Whether or not they’re fake, the impression still lasts.”

And, for all intents and purposes, once something is on the Internet, it’s forever.

[R]emoving any content from the internet is a Sisyphean task, even under the best of circumstances. Blaire, who had vowed to sue the deepfake creator responsible, learned from multiple lawyers that she’s unable to do so without the help of federal legislation. Only three states—California, Texas, and Virginia—have laws in place to specifically address deepfakes, and these are shaky at best. Section 230 absolves a site’s owner of legal liability from users posting illicit content. And as long as someone acts in “good faith” to remove content, they’re essentially safe from punishment—which may explain why the page’s owner posted an apology in which they call the impact of their deepfakes “eye opening.”

Laws and regulations dealing with issues like these are impossible to enact with any speed, let alone against the lightning-fast culture of the internet. “The general picture we ought to be looking at is something like the equivalent of the FDA or the EPA,” says O’Connor, “where you have a flexible regulatory body, and various interests online have to work with that body in order to be in compliance with certain kinds of standards.” With that kind of system in place, O’Connor believes progress could be made. “The picture that I think we should all be forwarding is one where our regulation is as flexible and able to change as things on the internet are flexible and able to change.”

Alas, there’s a very good chance the Supreme Court will render that entire concept—Congress delegating regulatory decisions to executive agencies with extreme deference from the judiciary—moot in this coming term.

Again, I lack the expertise to have real solutions to offer here. But we’re well beyond the stage where this is merely a theoretical problem. Faked videos are doing real harm to real people.

FILED UNDER: Science & Technology
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. Kathy says:

    Nuke the whole Earth from orbit. It’s the only way to make sure.

    A lot of things are impossible. We will never eradicate murder, robbery, assault, fraud, embezzlement, counterfeiting, money laundering, and a whole raft of other crimes and violations of individual rights. If we give up trying because we cannot end it all, we’ll get a lot more.

    We will also never cure every disease nor treat every condition. And that’s enough examples.

    But if you want to end crime, or for that matter disease, war, and natural disasters, you need to end life. That we can do.

    9
  2. Andy says:

    I think we may need to rethink the legal ownership of personal information to include likenesses and give individuals more control/authority. The problem, of course, is that quickly runs up against the 1st amendment in all kinds of contexts. I’m not sure how to resolve the quandary, but it’s an important conversation to have.

    7
  3. MarkedMan says:

    If you think about it, talented portrait artists have always been able to make deep fake porn, at least as far as still images are concerned. Photoshop opened up that possibility to a much wider audience but still required training and know-how. But generative AI opens it up to anyone, and the speed at which it can render the deep fakes means it can be used for video as well as still images.

    2
  4. de stijl says:

    The MAGA obsession about Taylor Swift confounds me. Why would anyone care? A pop star *might* endorse not Trump.

    A popular millennial is inclusive and vaguely liberal? OMG! That’s unacceptable!

    The fixation is really weird, more than a bit misogynistic, and very, very creepy. I don’t give a shit who she’s dating. That is def not my business. Some folks are theorizing a vast conspiracy around her relationship.

    Why would anyone care about that? All I know is it ain’t the Vikings in the Superbowl (again, forever) so I don’t give a single fuck.

  5. steve says:

    The claims by the defenders of AI have been that you can use AI to stop the harms. This would be a good case to see if that is true. Use AI to spot these and immediately stop them when they appear on social media; use AI to determine if they’re fake, and then track down who made the deepfake and who posted it.

    Steve

    3
  6. Grumpy Realist says:

    Guess women will have to start protesting by wearing niqabs, no?

    Either figure out a way to control this, or stop seeing women’s faces altogether. Your pick.

  7. Michael Reynolds says:

    Tech people, is it possible to require AIs to embed code identifying an image as AI-generated?

    2
  8. Bill Jempty says:

    @Kathy:

    Nuke the whole Earth from orbit. It’s the only way to make sure.

    That’s it man. Game over, man, game over.

  9. Gustopher says:

    The Microsoft software used to create pornographic images of Swift has guardrails meant to prevent exactly this kind of misuse; bad actors found ways around them.

    That sounds like a Microsoft problem.

    If we held companies liable for the harm they caused, they would find a way to not cause that harm, either by locking down their products, or by simply not offering products that are ripe for abuse.

    If we restrict access to pseudoephedrine because it can be used to make meth, we can restrict access to AI models that can be used to make non-consensual AI porn.

    Obviously, first we would have to define what is and isn’t harmful, and navigate the first amendment concerns, but when it comes to implementation, just smack the shit out of the companies making the tools.

    1
  10. just nutha says:

    @Michael Reynolds: Not a tech person, but it’s possible to require anything legislators will legislate. Enforcement is always the sticking point.

    2
  12. MarkedMan says:

    @Gustopher:

    That sounds like a Microsoft problem

    That strikes me as a big ask. While it may be technically possible, holding manufacturers of creative tools responsible for what is created with them seems like a very dangerous precedent. What if a murderer writes a manifesto in Microsoft Word? Or somebody shoots child porn with a Nikon camera?

    Dictating a digital watermark may make sense. But holding them responsible for what bad actors use their product for is a real stretch.

  13. Gustopher says:

    Reform and government oversight, however, is difficult, Trendacosta says, not least because legislators’ ideas of how to combat deceptive AI have been all backwards. The EFF, for instance, opposes the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act, introduced by Reps. María Elvira Salazar of Florida and Madeleine Dean of Pennsylvania earlier this month. Why? Because in seeking to guarantee “individual property rights in likeness and voice,” the proposed law would broaden publicity rights — that is, your right to not have a company falsely claim you endorse their product — to any kind of digital representation, “from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more,” as EFF notes in a statement on the bill. Other critics have also warned of the chilling effect this would have on digital free speech. Under its expansive language, sharing a Saturday Night Live clip that features an impression of Swift would potentially be a criminal offense.

    The EFF’s concerns are valid, but the bill could be tightened to address that. Something along the lines of the “confusion in the marketplace” standard that is used for trademark disputes. If a reasonable person cannot distinguish the impersonation, it crosses the line into impermissible.

    SNL, for instance, has a long history of overly broad, exaggerated impersonations. Whatever Baldwin they found was clearly not actually Donald Trump, he just acted out many of the mannerisms.

    This would leave the door open to animated AI porn in the style of the late 1800s Impressionist painters, but honestly, that sounds like the type of steampunk-inspired dystopia that would be weirdly charming, or at least absurd.

    “And here we have AI-Monet’s ‘Taylor Swift Being Boned By Kermit On Water Lilies’”

  14. Sleeping Dog says:

    @Michael Reynolds:

    It would be, and when you think about it, Digital Rights Management controls are used to address a similar problem. Whoever developed the generative AI software would need to embed a signature that identified the image or any other AI output. That signature would need to be incorruptible, meaning the AI output is destroyed if the signature is tampered with.
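
    A minimal sketch of that idea, assuming the generator embeds a keyed hash of the pixel data in the image’s metadata. Industry provenance efforts along these lines (e.g., C2PA) are far more elaborate; every name below is illustrative, not an actual vendor API.

        # Illustrative only: embed and verify a "provenance signature" in PNG metadata.
        # Requires the Pillow library; SIGNING_KEY stands in for a key the AI vendor would hold.
        import hashlib
        import hmac

        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        SIGNING_KEY = b"generator-secret-key"  # hypothetical vendor key

        def sign_and_save(img: Image.Image, path: str) -> None:
            """Embed an HMAC of the pixel data in a PNG text chunk."""
            digest = hmac.new(SIGNING_KEY, img.tobytes(), hashlib.sha256).hexdigest()
            meta = PngInfo()
            meta.add_text("ai-provenance", digest)
            img.save(path, pnginfo=meta)

        def verify(path: str) -> bool:
            """Return True only if the embedded signature matches the pixel data."""
            img = Image.open(path)
            claimed = img.info.get("ai-provenance")
            if claimed is None:
                return False  # no signature present
            expected = hmac.new(SIGNING_KEY, img.tobytes(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(claimed, expected)

    The obvious weakness, as the replies below note, is that metadata like this is trivially stripped or lost in re-encoding, so a signature only proves provenance when it survives.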

    1
  15. Gustopher says:

    @MarkedMan: There’s a difference between creating a car that someone uses to run over schoolchildren, and creating a self-driving car that someone can instruct to run over schoolchildren by telling it to disregard previous instructions.

    We mandate safety features in products all the time, and hold manufacturers liable when they fail. We restrict access to harmful substances.

    AI models are not some special, unregulatable thing. Just because Tech Bros think they should be able to do whatever they want with no regard to consequences, it doesn’t make it so.

    They’re a product like any other. Pseudoephedrine may be a good analogy — a product with good uses, but which has enough dangerous uses that we require some pretty significant safeguards.

    Or opioids, where states are suing the manufacturers for encouraging the overprescribing.

  16. Matt Bernius says:

    @just nutha:

    Not a tech person, but it’s possible to require anything legislators will legislate. Enforcement is always the sticking point.

    This. Building in metadata is easy. Forcing systems to do it is all but impossible.

    @steve:
    While I’m guessing there is an invisible /s (for sarcastic) at the end of that, let me just say that, in reality, that would be a terrible idea and lead to more problems than I care to count.

  17. Gustopher says:

    @Sleeping Dog: DRM is pretty easily stripped by a determined party, and watermarks can be corrupted, if not removed, with minimal noticeable effect.

    And for reducing piracy, that’s good enough — there’s constant pressure on each side, and the equilibrium is that it is enough of a hassle that relatively few people pirate content compared to buying or leasing it.

    But, it solves a different problem — steering users into paying for content. (And sometimes DRM is so intrusive it is easier to pirate content)

    Lately, it’s been more effective to go after sites that are hosting/streaming copyrighted content.

    Sites are sued repeatedly until they start monitoring what is on the platform, and then there is usually a way for the content sellers to register the content they want to protect; the platform fingerprints it and checks user content for those fingerprints with some fuzzy fingerprint-matching algorithm.
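
    For what it’s worth, here is a rough sketch of the kind of “fuzzy fingerprint” matching described above, using a simple difference hash. Real systems (PhotoDNA and its ilk) are far more sophisticated, and flag_for_review() is just a stand-in for whatever a platform would actually do.

        # Perceptual "difference hash": near-duplicate images produce hashes that
        # differ in only a few bits, so re-encodes and light edits still match.
        from PIL import Image

        def dhash(path: str, size: int = 8) -> int:
            """64-bit perceptual hash built from adjacent-pixel brightness comparisons."""
            img = Image.open(path).convert("L").resize((size + 1, size))
            pixels = list(img.getdata())
            bits = 0
            for row in range(size):
                for col in range(size):
                    left = pixels[row * (size + 1) + col]
                    right = pixels[row * (size + 1) + col + 1]
                    bits = (bits << 1) | (1 if left > right else 0)
            return bits

        def hamming(a: int, b: int) -> int:
            """Count of differing bits between two hashes."""
            return bin(a ^ b).count("1")

        # Compare registered (protected) content against an upload; a small
        # distance suggests a re-encoded or lightly edited copy.
        # if hamming(dhash("registered.png"), dhash("upload.jpg")) <= 10:
        #     flag_for_review()  # hypothetical platform hook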

    This ends up pushing pirated content out of the big sites, once again making it more of a hassle for most people to get content from a shady Russian website than to just buy or lease it.

    (CSAM — child sexual abuse material — can be thought of as copyrighted content here. The copyright checking comes from making the tools to identify CSAM scale better for more content*)

    AI generated content means that there isn’t a relatively small, well known set of images/videos to check for.

    That said, we can use AI to pretty reliably identify (white) people. We know if a video has Taylor Swift. We can also tell if there are naked people. And taken together, platforms could identify naked Taylor Swift and block that with decent accuracy.

    This requires either blocking all nudes, or having a list of who is allowed to be nude. And it also catches non-AI generated nudes of those people (which is fine). And it’s error prone. And it helps Taylor Swift, but not the person down the street who didn’t think to have their features uploaded for checking this way.
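
    A hypothetical sketch of that kind of check, combining a face matcher and a nudity classifier for people on a protected list. detect_faces() and nudity_score() stand in for real models and are injected as callables; none of these names are actual platform or library APIs.

        from dataclasses import dataclass
        from typing import Callable, List, Set

        @dataclass
        class Detection:
            identity: str      # best-match name from the protected list, or ""
            confidence: float  # similarity score in [0, 1]

        def should_block(
            image_bytes: bytes,
            protected_list: Set[str],
            detect_faces: Callable[[bytes], List[Detection]],
            nudity_score: Callable[[bytes], float],
            face_threshold: float = 0.9,
            nudity_threshold: float = 0.8,
        ) -> bool:
            """Block if an image classified as explicit contains a protected identity."""
            if nudity_score(image_bytes) < nudity_threshold:
                return False  # not explicit; nothing to do
            return any(
                d.identity in protected_list and d.confidence >= face_threshold
                for d in detect_faces(image_bytes)
            )

    As the surrounding comments note, this only protects people whose features are registered, and both models will make mistakes in each direction.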

    And, since there is a problem with AI generated porn of normal people being used to harass and blackmail people, that means that we have to at least look at the production.

    ——-
    *: this information is about a decade old, and things may have changed a lot since then in CSAM detection. I worked with people adjacent to the group that handled this. There was a constantly-updated database of fingerprints. I would like to hope there is a less reactive system now that is flagging likely CSAM that it doesn’t know about.

    I also don’t understand the fuzzy fingerprint matches beyond knowing that they work better than I would have expected, but not as well as others hoped.

    Plain content piracy is easier to detect, as there’s less of an effort to crop, or invert the image to escape detection.

    1
  18. Gustopher says:

    @Matt Bernius: I think it’s likely that output from current AI could be identified by AI. Aside from the six fingers and extra arms, there are likely other tells along edges that are less visible to humans but carry a lot of artifacts.

    And, given the amount of AI stuff out there, and the risks of training AI on AI generated content (it produces gibberish, not Skynet), there are definitely people working on improving detection.

  19. Andy says:

    @Michael Reynolds:

    At the pace things are progressing, in 10 years, a home PC could have enough computing power to do AI as long as it has a dataset it can pull from.

    1
  20. JKB says:

    Never fear, the Anons of 4Chan have deployed an AI that puts clothes on naked influencers’ online photos. A hue and cry has arisen condemning this modesty application of technology. “Don’t shroud me, dude”

    #DignifAI

  21. Kathy says:

    I wonder if we can fight Artificial Intelligence with Genuine Stupidity.

    3
  22. Kevin says:

    I don’t have an answer. I have two daughters, seven and five, and I’m worried about what could happen to them at school. I was an idiot teenage boy; I hate to imagine what would have happened if this technology had been available to us. And I empathize with the people who have had to deal with this shit, whether fake or real.

    On the other hand, I suspect a lot of the harm could be dealt with via existing law. There aren’t good technical answers; no one can enforce some sort of embedded information, and in many cases, you can’t really tell the difference between a lot of generated images and real images. I mean, yes, three arms, that sort of thing is obvious, but absent that, it can be hard to know for sure, especially since you can age/distort the images (ie, take a screenshot of the image, then a photo of that on a camera phone, etc.)

    And at some level, yes, deep fake pictures of Taylor Swift are bad. But are they deep fake pictures of Taylor Swift, or someone who looks a lot like her? Does intent matter? Can you know the intent, given that a lot of this stuff comes from 4Chan and the like? I don’t think anyone believes these pictures are actually Taylor Swift, so there’s no commercial harm. (And I’m absolutely not saying she shouldn’t be upset, just that things like a right of publicity or whatever are really dicey.) And much as I hate to say this, there’s something to be said, in the case of fake CSAM material, that at least real children aren’t being hurt. And given the dearth of treatments we have for pedophiles, and the stigma they face ever trying to get some sort of help, maybe it’s . . . not as bad as the alternative? (Again, I’m not saying that pedophilia isn’t really evil and harmful; it’s also in many cases involuntary. It’s like telling gay people not to be gay.)

    1
  23. Jay L Gischer says:

    Digital signatures, mentioned upthread, will do a lot to demonstrate ownership, which can be used to establish legal liability. But this rests on copyright-style protection of someone’s likeness. I think we’re kinda already there?

    I would say the most promising approach, though, mirrors music takedown notices on YouTube. There are clearly some recognition/pattern matching algorithms that YouTube runs that can identify small cuts of music, and block publication of a video that contains them. This cannot be done by human beings; there are too many things posted on YouTube every day for that. No, the music recognition stuff is good enough to notice that something is part of a Beatles recording (your own version of a Beatles song will probably not get blocked, at least not right away).

    So image/facial recognition is probably good enough to notice TS’s image and block it, or at least flag it. However, the legal underpinnings aren’t there: For one thing, porn sites don’t do DMCA blocking as far as I know. (Maybe they do, what do I know?) Getting them to comply with facial blocking might be a problem, or it might not be.

    And then when the inevitable bootleg stuff appears, it would be really nice to be able to answer, “who made this?”, and the digital signatures will help with that. However, at the moment, I have to ask, “what consequence would there be to the person who made it?” Particularly if they made it, and published it for no apparent profit motive. What are the damages that can be reclaimed? How do you make it so they won’t want to do it again, and even brag about doing it?