The rapid advancement and accessibility of artificial intelligence (AI) have unfortunately opened the door to a deeply disturbing trend: the creation of realistic images depicting children in sexual situations. Experts warn that this alarming development could lead to a surge in real-world sex crimes against children.
The rise of AI platforms capable of generating human-like text and realistic images gained significant traction in late 2022 and into 2023, particularly after the launch of ChatGPT. While many have explored these technologies for legitimate purposes, some individuals have exploited them for harmful activities.
The National Crime Agency (NCA) in the U.K. recently issued a warning about the escalating proliferation of AI-generated child sexual abuse material. The agency expressed concern that these fabricated images are contributing to the normalization of pedophilia and harmful behavior toward children.

NCA Director General Graeme Biggar stated that viewing such imagery, whether real or AI-generated, significantly increases the risk that an individual will go on to commit child sexual abuse. The NCA estimates that up to 830,000 adults in the U.K. pose a sexual threat to children, a figure roughly ten times the size of the U.K.'s prison population.
Biggar highlighted that a majority of child sexual abuse cases involve viewing explicit material. The ease with which AI can now be used to create and access such content further normalizes abusive behavior.

This disturbing trend is not limited to the U.K. In the United States, a similar surge in the use of AI to generate child sexual abuse imagery is occurring. Rebecca Portnoff, director of data science at Thorn, a non-profit dedicated to protecting children, emphasized the challenge this poses for victim identification and law enforcement.

While mainstream AI platforms have community guidelines prohibiting the creation of harmful content, malicious actors are finding ways to circumvent these restrictions. Biggar stressed the added burden this places on law enforcement, which must now differentiate between images of real victims and AI-generated ones.

The FBI has also issued warnings about the use of AI-generated images in sextortion scams. Deepfakes, fabricated media created with deep-learning AI, are increasingly being used to manipulate and exploit individuals, including children. These fabricated images are then circulated online to harass victims or extort money from them.