
How Sora 2 Is Being Used to Create Nightmare Imagery of Kids

A new video generation tool developed by OpenAI is mired in controversy, with a viral campaign by child safety advocates sounding alarm bells across the internet.

Sora 2, launched in late 2025 as a cutting-edge AI video creator, has become an avenue for creating hyper-realistic videos of AI-generated children in sexualized and deeply disturbing scenarios.

Despite the company’s strict policies against child exploitation and child sexual abuse material, bad actors are finding ways to bypass its safeguards and spread such content across social media platforms like TikTok.

The scale of the problem became clear once watchdog organizations began documenting it. Researchers at Ekō, a digital advocacy group, ran an undercover investigation that exposed shocking vulnerabilities in Sora 2’s safety systems.

Posing as teenagers aged 13 to 14, they created 22 videos that violated OpenAI’s own policies on forbidden content, including footage of minors using drugs, self-harming, and posing in sexualized ways, as well as simulated school shootings and racist stereotypes.

Sora and Other Tools Fuel Disturbing Rise in Fake Videos Targeting Minors

The kinds of videos being created are frightening. Some are fake toy advertisements copied from real commercials, but depicting children using actual adult products such as vibrators.

Others are parody play sets featuring convicted sex offender Jeffrey Epstein, built with fabricated child faces in an apparent attempt to comply with rules against using real images of minors.

These are not isolated cases. The Internet Watch Foundation, which tracks child sexual abuse imagery online, recorded a substantial rise in AI-generated abuse imagery in 2025.

Perpetrators strip watermarks and other identifying elements from the material before spreading it.


What makes this more disturbing is how the offensive material reaches users.

According to Ekō’s findings, platform recommendation algorithms pushed antisemitic caricatures, violent animated sequences, and degrading stereotypes to a newly created teen account within minutes of its opening.

Why Sora 2’s Latest Feature Is a Safety Minefield

One of Sora 2’s most contentious features, “Cameo,” lets users upload their face and voice to insert themselves into generated videos.

Though the technology has legitimate uses in creative production, it opens the door to serious abuse, including cyberbullying, blackmail, and non-consensual deepfakes.

There is also the privacy concern that uploaded biometric data may be stored and used to train future AI models.

What is most disconcerting is how easily the safety measures can be circumvented. Although OpenAI employs multiple safeguards, including prompt filtering, frame-by-frame content verification, audio checks, and specific rules for child-related content, simple text prompts and account manipulation were enough to get around them.

Common Sense Media, a non-profit that evaluates children’s media, has rated Sora 2 “unacceptable” for children, citing inadequate parental controls, a lack of warnings about offensive material, and easily accessible disturbing content.

What OpenAI Says It’s Doing

OpenAI’s stated rules prohibit the generation of sexual content, violence, self-harm, and deepfakes. The company embeds C2PA provenance metadata in Sora videos, adds watermarks, and runs classifiers that detect whether children appear in the content, which is supposed to trigger stricter filtering.
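
That provenance layer can be illustrated from the consumer side. Below is a minimal Python sketch of checking a downloaded clip for C2PA metadata, assuming the open-source c2patool command-line tool from the Content Authenticity Initiative is installed on the system; the exact invocation and output shape here are illustrative assumptions, not a documented OpenAI workflow. A clip whose provenance has been stripped, as described above, would come back empty.

    # Minimal sketch: check a video file for C2PA provenance metadata.
    # Assumes the open-source `c2patool` CLI is installed and on PATH;
    # treat flags and output format as illustrative, not authoritative.
    import json
    import subprocess
    import sys

    def read_c2pa_manifest(path: str) -> dict | None:
        """Return the C2PA manifest store for `path` as a dict,
        or None if the file carries no provenance data."""
        result = subprocess.run(
            ["c2patool", path],  # default invocation prints the manifest as JSON
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            return None  # no manifest found, or the file was unreadable
        try:
            return json.loads(result.stdout)
        except json.JSONDecodeError:
            return None

    if __name__ == "__main__":
        manifest = read_c2pa_manifest(sys.argv[1])
        if manifest is None:
            print("No C2PA provenance found; metadata may have been stripped.")
        else:
            # A Sora clip would identify its generator in the manifest's
            # claim_generator field.
            print(json.dumps(manifest, indent=2))
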

It also applies youth protections and requires consent for the Cameo feature. When content that violates the terms of service does appear, platforms like TikTok take it down.

However, critics say these measures are inadequate given how easy the technology makes it to produce harmful content.

Rethinking AI Moderation in the Age of Generative Video

Sora 2 exemplifies a broader challenge in AI: the widening gap between what the technology can do and what its safeguards can prevent. As video generation capabilities advance, the potential for misuse expands with them.

Safety advocates are now demanding safety by design: systems that are significantly harder to use to produce harmful content in the first place rather than relying on after-the-fact moderation, along with stronger age-verification measures and greater transparency about how these tools are actually being used.

For now, it falls to parents, teachers, and platforms to play catch-up with a technology that is moving far faster than its safeguards. Sora 2 should be an eye-opening reminder: innovation without adequate protective infrastructure causes very real damage to those most at risk.

 
