
AI puts real child sex victims at risk, IWF experts say

Helen Burchell & Lydia Dowling Ranera

BBC News, Cambridgeshire

[Image: a composite showing the silhouette of a boy, with a man's hand typing on a keyboard in the foreground. Credit: BBC]

AI-generated images of children are on the increase and are causing concern, experts say

An increase in sophisticated AI-generated images of child abuse could result in police and other agencies chasing “fake” rather than genuine abuse, a charity has said.

The Internet Watch Foundation (IWF), based in Histon, near Cambridge, finds, flags, and removes images and videos of child sexual abuse from the web.

However, the ever-increasing use of AI-generated images – a 300% rise in 2024 compared with the previous year – has added another layer of complexity to its work.

Dan Sexton, IWF chief technology officer, said there was now a risk that law enforcement and other agencies could be “trying to rescue children that don’t exist or not trying to rescue children because they think they’re AI”.

“About two years ago we first started seeing this content being circulated, and there were little ‘tells’ – it looked distinctly different,” Mr Sexton said.

But developments in the technology meant the content had, at times, become indistinguishable from real imagery.

“There will be imagery in there that is so realistic or so similar to the content we see, you cannot tell the difference,” Mr Sexton added.

“What creates a lot of concern is the ability of our people in law enforcement and others – those people who are out there trying to rescue children at risk – knowing whether a child is a real child and needs to be rescued and is at current harm, or they are AI-generated and they don’t exist.

“The risk there is, we and policing end up trying to rescue children that don’t exist, or not trying to rescue children because they think they’re AI.”

[Image: Dan Sexton, wearing a white shirt and burgundy tie, standing in front of the company logo. Credit: IWF]

Dan Sexton said he was concerned that chasing AI-generated children could put real victims at risk

Mr Sexton said the IWF looked for trends about where the content was being generated and distributed.

“But when it starts to get shared with real content as well, there’s every chance we won’t know the AI-generated content is out there,” he said.

He expressed concerns about “the ability to safeguard children, and the risk that there would be children who won’t get safeguarded because we’re too busy dealing with synthetic children”.

The foundation is also looking at the use of AI to detect AI.

“The scale of the problem – and the potential increase in the scale… means it’s never been more important to have AI tools… to help us.”


Mr Sexton added: “I’d like to one day be able to show a report that says there’s less [child sexual abuse imagery] but unfortunately that’s not the case – it’s not happened so far.”

[Image: a woman seen from behind, looking out of a window, with short, dark hair, a beige jumper and a patterned purple scarf. Credit: Lydia Dowling Ranera/BBC]

Natalia said early AI images were simpler to detect, as limbs, digits and clothing texture were often “giveaways”

Natalia (not her real name) has been an analyst at the IWF for almost five years and said AI was one of her specialist areas.

She said it was becoming “more and more difficult” to tell the difference between AI and a real child.

“The content has become so realistic and also the speed at which this technology is developing is really alarming,” she said.

“The IWF saw its first AI images in 2023 but in 2024 the number of reports quadrupled.”

She also expressed concern that police, for example, may be “sent chasing a non-existent child”.

“If we think a child is in danger now, we will make a referral [to the police] and we really don’t want to make a referral about an AI-generated child.”

Natalia illustrated her AI concerns with the story of a victim of child abuse whose images had been circulating since 2011.

Although her abuser was caught, and the girl eventually “went public” with her story, the images had been shared widely.

“Now we are seeing new images of her – images generated by AI – some of them are even more severe than the images that were actually taken in reality,” Natalia said.

“This is as far from a victimless crime as it gets – there’s a very real victim here and I think real harm is being done by this content.”

[Image: the Internet Watch Foundation's offices, a three-storey glass building with bushes and trees around it and a car parking area to the left. Credit: IWF]

The Internet Watch Foundation’s offices are in a village near Cambridge

A spokesperson from the National Crime Agency said: “Generative AI image creation tools will increase the volume of child sexual abuse material available across the clear web and dark web, creating difficulties with identifying and safeguarding victims due to vastly improved photo realism.

“However, we are working closely with partners to tackle this threat, and are continuing to invest in technology to assist us with CSA (child sexual abuse) investigations to safeguard children.”
