
OpenAI’s Sora Underscores the Growing Threat of Deepfakes

When OpenAI released its AI video-generation app, Sora, in September, it promised that “you are in control of your likeness end-to-end.” Users can include themselves and their friends in videos through a feature called “cameos”: the app scans a user’s face and performs a liveness check, providing data both to generate videos of the user and to authenticate their consent for friends to use their likeness on the app.
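
In essence, the cameo system OpenAI describes is an access-control layer: a likeness can appear in a video only if its owner has passed a liveness check and the requester is either the owner or a friend the owner has approved. The sketch below illustrates that permission logic in Python; all names here are hypothetical, and this is a toy illustration, not OpenAI’s actual system. Notably, the weakness Reality Defender exploited sits above this layer, in verifying who is actually behind the camera.

```python
# Toy illustration of a consent gate like the one Sora's "cameos" feature
# describes. All names are hypothetical, not OpenAI's API.
from dataclasses import dataclass, field

@dataclass
class CameoProfile:
    user_id: str
    liveness_verified: bool = False  # set after a face scan and liveness check
    approved_friends: set[str] = field(default_factory=set)  # who may use this likeness

    def grant(self, friend_id: str) -> None:
        self.approved_friends.add(friend_id)

def may_generate(requester_id: str, subject: CameoProfile) -> bool:
    """A video of `subject` may be generated only if the subject has passed
    a liveness check and the requester is the subject or an approved friend."""
    if not subject.liveness_verified:
        return False
    return requester_id == subject.user_id or requester_id in subject.approved_friends

# Example: Alice verifies her likeness and approves Bob; Mallory is refused.
alice = CameoProfile("alice", liveness_verified=True)
alice.grant("bob")
assert may_generate("bob", alice)
assert not may_generate("mallory", alice)
```

The permission logic itself is straightforward; the hard problem, as Reality Defender demonstrated, is that the liveness check feeding it can be fooled by off-the-shelf tools into marking an impersonator as verified.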

But Reality Defender, a company specializing in identifying deepfakes, says it was able to bypass Sora’s anti-impersonation safeguards within 24 hours. Platforms such as Sora give a “plausible sense of security,” says Reality Defender CEO Ben Colman, despite the fact that “anybody can use completely off-the-shelf tools” to pass authentication as someone else.

Reality Defender’s researchers used publicly available footage of notable individuals, including CEOs and entertainers, drawn from earnings calls and media interviews. The company breached the safeguards for every likeness it attempted to impersonate. Colman argues that “any smart 10th grader” could figure out the tools his company used.

An OpenAI spokesperson said in an emailed statement to TIME that “the researchers built a sophisticated deepfake system of CEOs and entertainers to try to bypass those protections, and we’re continually strengthening Sora to make it more resilient against this kind of misuse.” 

Sora’s release, and the rapid circumvention of its authentication mechanisms, is a reminder that society is unprepared for the next wave of increasingly realistic, personalized deepfakes. The gap between the advancing technology and lagging regulation leaves individuals on their own to navigate an uncertain informational landscape—and to protect themselves from possible fraud and harassment.

“Platforms absolutely know that this is happening, and absolutely know that they could solve it if they wanted to. But until regulations catch up—we’re seeing the same thing across all social media platforms—they’ll do nothing,” says Colman.

Sora hit 1 million downloads in under five days—faster than ChatGPT, which at the time was the fastest-growing consumer app—despite requiring users to have an invite, according to Bill Peebles, OpenAI’s head of Sora. OpenAI’s release followed a similar offering from Meta called Vibes, which is integrated into the Meta AI app.

The increasing accessibility of convincing deepfakes has alarmed some observers. “The truth is that spotting [deepfakes] by eye is becoming nearly impossible, given rapid advances in text-to-image, text-to-video, and audio cloning capabilities,” Jennifer Ewbank, a former deputy director of digital innovation at the CIA, said in an email to TIME.

Regulators have been grappling with how to address deepfakes since at least 2019, when President Trump signed a law requiring the Director of National Intelligence to investigate the use of deepfakes by foreign governments. However, as the accessibility of deepfakes has increased, the focus of legislation has moved closer to home. In May 2025, the Take It Down Act was signed into federal law, prohibiting the online publication of “intimate visual depictions” of minors and of nonconsenting adults, and requiring platforms to take down offending content within 48 hours of a request—but enforcement will only begin in May 2026.

Legislation prohibiting deepfakes can be fraught. “It’s actually really complicated, technically and legally, because there are First Amendment concerns about taking down certain speech,” says Jameson Spivack, deputy director for U.S. policy at the Future of Privacy Forum. In August, a federal judge struck down a California law aimed at restricting AI-generated deepfake content during elections, after Elon Musk’s X sued the state on the grounds that the law violated First Amendment protections. Given such concerns, requirements to label AI-generated content are more common than outright bans, says Spivack.

Another promising approach is for platforms to adopt better know-your-customer schemes, says Fred Heiding, a research fellow at Harvard University’s Defense, Emerging Technology, and Strategy Program. Know-your-customer schemes require users of platforms such as Sora to sign in using verified identification, increasing accountability and allowing authorities to trace illegal behavior. But there are trade-offs here, too. “The problem is we really value anonymity in the West,” says Heiding. “That’s good, but anonymity has a cost, and the cost is these things are really difficult to enforce.”

While legislators grapple with the increasing prevalence and realism of deepfakes, individuals and organizations can take steps to protect themselves. Spivack recommends authentication software such as Content Credentials, developed by the Coalition for Content Provenance and Authenticity, which attaches provenance metadata to images and videos. Cameras from Canon and Sony support the standard, as does the Google Pixel 10. Such authentication increases trust in genuine images and undermines fakes.
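
Conceptually, schemes like Content Credentials bind a cryptographic hash of the image or video to a signed “manifest” describing its origin; a verifier recomputes the hash and checks the signature, so any edit to the file breaks the credential. The sketch below shows that general pattern in Python using the `cryptography` package. It is a simplified illustration of signed provenance metadata, not an implementation of the actual C2PA specification.

```python
# Simplified illustration of signed provenance metadata (the general idea
# behind Content Credentials), NOT an implementation of the C2PA spec.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(asset: bytes, claims: dict, key: Ed25519PrivateKey) -> dict:
    """Bind a hash of the asset to provenance claims and sign the result."""
    payload = json.dumps(
        {"asset_sha256": hashlib.sha256(asset).hexdigest(), **claims},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": key.sign(payload)}

def verify_manifest(asset: bytes, manifest: dict, public_key) -> bool:
    """Check the signature, then confirm the asset still matches its hash."""
    try:
        public_key.verify(manifest["signature"], manifest["payload"])
    except InvalidSignature:
        return False
    recorded = json.loads(manifest["payload"])["asset_sha256"]
    return recorded == hashlib.sha256(asset).hexdigest()

# A camera or editing tool would sign at capture time...
key = Ed25519PrivateKey.generate()
photo = b"raw image bytes..."
manifest = make_manifest(photo, {"device": "example-camera"}, key)

# ...and a viewer verifies later. Any change to the bytes breaks the check.
assert verify_manifest(photo, manifest, key.public_key())
assert not verify_manifest(photo + b"tamper", manifest, key.public_key())
```

In the full standard, signing keys chain to certificates from trusted issuers, which is what lets a viewer distinguish a credential signed by a camera maker from one anybody could produce with a self-generated key.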

As the online information landscape changes, making it harder to trust the things we see and hear online, lawmakers and individuals alike must build society’s resilience to fake media. “The more we cultivate that resilience, the harder it becomes for anyone to monopolize our attention and manipulate our trust,” says Ewbank.
