
AI-generated child sex images spawn new nightmare for the web

cigaretteman

The revolution in artificial intelligence has sparked an explosion of disturbingly lifelike images showing child sexual exploitation, fueling concerns among child-safety investigators that they will undermine efforts to find victims and combat real-world abuse.


Generative-AI tools have set off what one analyst called a “predatory arms race” on pedophile forums because they can create within seconds realistic images of children performing sex acts, commonly known as child pornography.

Thousands of AI-generated child-sex images have been found on forums across the dark web, a layer of the internet visible only with special browsers, with some participants sharing detailed guides for how other pedophiles can make their own creations.

“Children’s images, including the content of known victims, are being repurposed for this really evil output,” said Rebecca Portnoff, the director of data science at Thorn, a nonprofit child-safety group that has seen month-over-month growth of the images’ prevalence since last fall.


“Victim identification is already a needle in a haystack problem, where law enforcement is trying to find a child in harm’s way,” she said. “The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge.”


The flood of images could confound the central tracking system built to block such material from the web because it is designed only to catch known images of abuse, not detect newly generated ones. It also threatens to overwhelm law enforcement officials who work to identify victimized children and will be forced to spend time determining whether the images are real or fake.

The images have also ignited debate on whether they even violate federal child-protection laws because they often depict children who don’t exist. Justice Department officials who combat child exploitation say such images still are illegal even if the child shown is AI-generated, but they could cite no case in which a suspect had been charged for creating one.


The new AI tools, known as diffusion models, allow anyone to create a convincing image solely by typing in a short description of what they want to see. The models, such as DALL-E, Midjourney and Stable Diffusion, were fed billions of images taken from the internet, many of which showed real children and came from photo sites and personal blogs. They then mimic those visual patterns to create their own images.
The tools have been celebrated for their visual inventiveness and have been used to win fine-arts competitions, illustrate children’s books and spin up fake news-style photographs, as well as to create synthetic pornography of nonexistent characters who look like adults.

But they also have increased the speed and scale with which pedophiles can create new explicit images because the tools require less technical sophistication than past methods, such as superimposing children’s faces onto adult bodies using “deepfakes,” and can rapidly generate many images from a single command.


It’s not always clear from the pedophile forums how the AI-generated images were made. But child-safety experts said many appeared to have relied on open-source tools, such as Stable Diffusion, which can be run in an unrestricted and unpoliced way.
Stability AI, which runs Stable Diffusion, said in a statement that it bans the creation of child sex-abuse images, assists law enforcement investigations into “illegal or malicious” uses and has removed explicit material from its training data, reducing the “ability for bad actors to generate obscene content.”

But anyone can download the tool to their computer and run it however they want, largely evading company rules and oversight. The tool’s open-source license asks users not to use it “to exploit or harm minors in any way,” but its underlying safety features, including a filter for explicit images, are easily bypassed with a few lines of code that a user can add to the program.


 
Testers of Stable Diffusion have discussed for months the risk that AI could be used to mimic the faces and bodies of children, according to a Washington Post review of conversations on the chat service Discord. One commenter reported seeing someone use the tool to try to generate fake swimsuit photos of a child actress, calling it “something ugly waiting to happen.”
But the company has defended its open-source approach as important for users’ creative freedom. Stability AI’s chief executive, Emad Mostaque, told The Verge last year that “ultimately, it’s peoples’ responsibility as to whether they are ethical, moral and legal in how they operate this technology,” adding that “the bad stuff that people create … will be a very, very small percentage of the total use.”

Stable Diffusion’s main competitors, DALL-E and Midjourney, ban sexual content and are not provided open source, meaning that their use is limited to company-run channels and all images are recorded and tracked.


OpenAI, the San Francisco research lab behind DALL-E and ChatGPT, employs human monitors to enforce its rules, including a ban against child sexual abuse material, and has removed explicit content from its image generator’s training data so as to minimize its “exposure to these concepts,” a spokesperson said.
“Private companies don’t want to be a party to creating the worst type of content on the internet,” said Kate Klonick, an associate law professor at St. John’s University. “But what scares me the most is the open release of these tools, where you can have individuals or fly-by-night organizations who use them and can just disappear. There’s no simple, coordinated way to take down decentralized bad actors like that.”

On dark-web pedophile forums, users have openly discussed strategies for how to create explicit photos and dodge anti-porn filters, including by using non-English languages they believe are less vulnerable to suppression or detection, child-safety analysts said.


On one forum with 3,000 members, roughly 80 percent of respondents to a recent internal poll said they had used or intended to use AI tools to create child sexual abuse images, said Avi Jager, the head of child safety and human exploitation at ActiveFence, which works with social media and streaming sites to catch malicious content.
Forum members have discussed ways to create AI-generated selfies and build a fake school-age persona in hopes of winning other children’s trust, Jager said. Portnoff, of Thorn, said her group also has seen cases in which real photos of abused children were used to train the AI tool to create new images showing those children in sexual positions.

Yiota Souras, the chief legal officer of the National Center for Missing and Exploited Children, a nonprofit that runs a database that companies use to flag and block child-sex material, said her group has fielded a sharp uptick of reports of AI-generated images within the last few months, as well as reports of people uploading images of child sexual abuse into the AI tools in hopes of generating more.


Though such images make up a small fraction of the more than 32 million reports the group received last year, their increasing prevalence and realism threaten to burn up the time and energy of investigators who work to identify victimized children and don’t have the ability to pursue every report, she said. The FBI said in an alert this month that it had seen an increase in reports regarding children whose photos were altered into “sexually-themed images that appear true-to-life.”
“For law enforcement, what do they prioritize?” Souras said. “What do they investigate? Where exactly do these go in the legal system?”

Some legal analysts have argued that the material falls in a legal gray zone because fully AI-generated images do not depict a real child being harmed. In 2002, the Supreme Court struck down two provisions of a 1996 congressional ban on “virtual child pornography,” ruling that its wording was broad enough to potentially criminalize some literary depictions of teenage sexuality.


The ban’s defenders argued at the time that the ruling would make it harder for prosecutors arguing cases involving child sexual abuse because defendants could claim the images didn’t show real children.
In his dissent, Chief Justice William H. Rehnquist wrote, “Congress has a compelling interest in ensuring the ability to enforce prohibitions of actual child pornography, and we should defer to its findings that rapidly advancing technology soon will make it all but impossible to do so.”
Daniel Lyons, a law professor at Boston College, said the ruling probably merits revisiting, given how the technology has advanced in the last two decades.
 
Which AI programs allow this? It’ll be much more difficult with open source stuff in the future, but right now can’t they tell who’s doing this?
 
Moral philosophers be like
Its Happening Ron Paul GIF
 
Which AI programs allow this? It’ll be much more difficult with open source stuff in the future, but right now can’t they tell who’s doing this?

"But anyone can download the tool to their computer and run it however they want, largely evading company rules and oversight."
 
AI is going to take us down a lot of weird rabbit holes. This will be just the tip of the inconceivable icebergs it runs us into.
 