The revolution in artificial intelligence has sparked an explosion of disturbingly lifelike images showing child sexual exploitation, fueling concerns among child-safety investigators that they will undermine efforts to find victims and combat real-world abuse.
Generative-AI tools have set off what one analyst called a “predatory arms race” on pedophile forums because they can create within seconds realistic images of children performing sex acts, commonly known as child pornography.
Thousands of AI-generated child-sex images have been found on forums across the dark web, a layer of the internet visible only with special browsers, with some participants sharing detailed guides for how other pedophiles can make their own creations.
“Children’s images, including the content of known victims, are being repurposed for this really evil output,” said Rebecca Portnoff, the director of data science at Thorn, a nonprofit child-safety group that has seen month-over-month growth of the images’ prevalence since last fall.
“Victim identification is already a needle in a haystack problem, where law enforcement is trying to find a child in harm’s way,” she said. “The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge.”
The flood of images could confound the central tracking system built to block such material from the web because it is designed only to catch known images of abuse, not detect newly generated ones. It also threatens to overwhelm law enforcement officials who work to identify victimized children and will be forced to spend time determining whether the images are real or fake.
The images have also ignited debate on whether they even violate federal child-protection laws because they often depict children who don’t exist. Justice Department officials who combat child exploitation say such images still are illegal even if the child shown is AI-generated, but they could cite no case in which a suspect had been charged for creating one.
The new AI tools, known as diffusion models, allow anyone to create a convincing image solely by typing in a short description of what they want to see. The models, such as DALL-E, Midjourney and Stable Diffusion, were fed billions of images taken from the internet, many of which showed real children and came from photo sites and personal blogs. They then mimic those visual patterns to create their own images.
The tools have been celebrated for their visual inventiveness and have been used to win fine-arts competitions, illustrate children’s books and spin up fake news-style photographs, as well as to create synthetic pornography of nonexistent characters who look like adults.
But they also have increased the speed and scale with which pedophiles can create new explicit images because the tools require less technical sophistication than past methods, such as superimposing children’s faces onto adult bodies using “deepfakes,” and can rapidly generate many images from a single command.
It’s not always clear from the pedophile forums how the AI-generated images were made. But child-safety experts said many appeared to have relied on open-source tools, such as Stable Diffusion, which can be run in an unrestricted and unpoliced way.
Stability AI, which runs Stable Diffusion, said in a statement that it bans the creation of child sex-abuse images, assists law enforcement investigations into “illegal or malicious” uses and has removed explicit material from its training data, reducing the “ability for bad actors to generate obscene content.”
But anyone can download the tool to their computer and run it however they want, largely evading company rules and oversight. The tool’s open-source license asks users not to use it “to exploit or harm minors in any way,” but its underlying safety features, including a filter for explicit images, are easily bypassed with a few lines of code that a user can add to the program.