Gone are the days when a “fake” on the internet was easy to spot, often just a badly Photoshopped picture. Now, we’re swimming in a sea of AI-generated videos and deepfakes, from bogus celebrity endorsements to false disaster broadcasts. The latest technology has become uncomfortably clever at blurring the lines between reality and fiction, making it almost impossible to discern what’s real.
And the situation is rapidly escalating. OpenAI's Sora videos were already muddying the waters, but now its viral TikTok-style social app, built on the new Sora 2 model, is the internet's hottest and most deceptive ticket: a feed where everything is 100% fake. I've called it a "deepfake fever dream," and for good reason. The platform keeps getting better at making fiction look real, and that carries significant real-world risks.
If you're struggling to separate the real from the AI, you're not alone. Here are some tips to help you cut through the noise and get to the truth behind each AI-generated creation.
My AI expert take on Sora videos
From a technical standpoint, Sora videos are impressive compared to competitors such as Midjourney V1 and Google Veo 3. They have high resolution, synchronized audio and surprising creativity. Sora’s most popular feature, dubbed “cameo,” lets you use other people’s likenesses and insert them into nearly any AI-generated scene. It’s an impressive tool, resulting in scarily realistic videos.
That’s why so many experts are concerned about Sora. The app makes it easier for anyone to create dangerous deepfakes, spread misinformation and blur the line between what’s real and what’s not. Public figures and celebrities are especially vulnerable to these deepfakes, and unions like SAG-AFTRA have pushed OpenAI to strengthen its guardrails.
Identifying AI content is an ongoing challenge for tech companies, social media platforms and everyone else. But it’s not totally hopeless. Here are some things to look out for to determine whether a video was made using Sora.
Look for the Sora watermark
Every video made with the Sora iOS app includes a watermark when you download it: the white Sora logo, a cloud icon, that bounces around the edges of the video, similar to the way TikTok videos are watermarked. Visible watermarks are one of the biggest ways AI companies can help us spot AI-generated content, serving as a clear sign that something was made with the help of AI. Google's Gemini "nano banana" model, for example, automatically watermarks its images.
But watermarks aren't perfect. If a watermark is static (not moving), it can easily be cropped out. Even moving watermarks such as Sora's can be scrubbed by apps designed specifically to remove them, so a watermark alone can't be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, before Sora, there was no popular, easily accessible, no-skill-needed way to make those videos. Still, his answer underscores a valid point: You'll need other methods to verify authenticity.
Check the metadata
I know you’re probably thinking that there’s no way you’re going to check a video’s metadata to determine if it’s real. I understand where you’re coming from. It’s an extra step, and you might not know where to start. But it’s a great way to determine if a video was made with Sora, and it’s easier to do than you think.
Metadata is a collection of information automatically attached to a piece of content when it's created, giving you more insight into how an image or video was made. It can include the type of camera used to take a photo, the location, the date and time of capture and the filename. Every photo and video has metadata, whether it was created by a human or by AI. And a lot of AI-created content also carries content credentials that denote its AI origins.
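If you're curious what that metadata actually looks like, the free ExifTool utility will dump everything attached to a file. Here's a minimal Python sketch that shells out to it; it assumes ExifTool is installed on your machine, and the field names at the end are common examples rather than guarantees for every file.

```python
import json
import subprocess

def dump_metadata(path: str) -> dict:
    """Return a file's metadata as a dict by calling the ExifTool CLI.

    Assumes ExifTool (exiftool.org) is installed and on your PATH.
    """
    out = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)[0]  # ExifTool returns a one-item JSON array

meta = dump_metadata("clip.mp4")  # hypothetical filename
# Typical fields: CreateDate, Duration, and Make/Model for camera photos.
for key in ("FileName", "CreateDate", "Make", "Model"):
    print(key, "->", meta.get(key, "(not present)"))
```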
OpenAI is part of the Coalition for Content Provenance and Authenticity (C2PA), which means Sora videos include C2PA metadata. You can use the verification tool from the Content Authenticity Initiative, a related group that promotes the C2PA standard, to check a video, image or document's metadata. Here's how.
How to check a photo, video or document’s metadata
1. Navigate to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Check the information in the right-side panel. If it’s AI-generated, it should include that in the content summary section.
When you run a Sora video through this tool, it'll say the video was "issued by OpenAI" and note that it's AI-generated. All Sora videos should contain these credentials, letting you confirm that a video was created with Sora.
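If you'd rather check files locally, the Content Authenticity Initiative also publishes c2patool, an open-source command-line utility that prints a file's C2PA manifest. Here's a rough Python sketch that runs it and scans the output for AI markers; the tool is real, but the marker strings I'm matching are illustrative guesses, not an official list.

```python
import subprocess
import sys

def check_credentials(path: str) -> None:
    """Print whether a file's C2PA manifest suggests AI generation.

    Assumes c2patool (github.com/contentauth/c2patool) is installed
    and on your PATH. The marker strings below are illustrative, not
    an official or exhaustive list.
    """
    try:
        result = subprocess.run(
            ["c2patool", path],
            capture_output=True, text=True, check=True,
        )
    except FileNotFoundError:
        sys.exit("c2patool not found; install it from the CAI's GitHub.")
    except subprocess.CalledProcessError:
        sys.exit("No content credentials found (or the file is unreadable).")

    manifest = result.stdout
    markers = [
        "trainedAlgorithmicMedia",  # IPTC digital source type for AI media
        "OpenAI",                   # Sora manifests are issued by OpenAI
    ]
    hits = [m for m in markers if m in manifest]
    if hits:
        print(f"Credentials suggest AI-generated content: {hits}")
    else:
        print("Credentials present, but no obvious AI markers found.")

if __name__ == "__main__":
    check_credentials(sys.argv[1] if len(sys.argv) > 1 else "video.mp4")
```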
This tool, like all AI detectors, isn't perfect, and there are plenty of ways AI videos can slip past it. Non-Sora videos may not contain the metadata signals the tool needs to determine whether they're AI-created; videos made with Midjourney, for example, didn't get flagged in my testing. And even a genuine Sora video that's been run through a third-party app (like a watermark remover) and redownloaded is less likely to be flagged as AI.
The Content Authenticity Initiative’s verify tool correctly flagged that a video I made with Sora was AI-generated, along with the date and time I created it.
Look for other AI labels and include your own
If you’re on one of the social media platforms from Meta, like Instagram or Facebook, you may get a little help determining whether something is AI. Meta has internal systems in place to help flag AI content and label it as such. These systems are not perfect, but you can clearly see the label for posts that have been flagged. TikTok and YouTube have similar policies for labeling AI content.
The only truly reliable way to know if something is AI-generated is if the creator discloses it. Many social media platforms now offer settings that let users label their posts as AI-generated, and even a simple disclosure in your caption can go a long way toward helping everyone understand how something was created.
While you're scrolling in Sora, you know nothing is real. But once AI-generated videos leave the app and get shared elsewhere, it becomes our collective responsibility to disclose how they were created. As models like Sora continue to blur the line between reality and fiction, it's up to all of us to make it as clear as possible whether something is real or AI.
Most importantly, remain vigilant
There's no single foolproof method for telling at a glance whether a video is real or AI. The best thing you can do to avoid being duped is to stop automatically, unquestioningly believing everything you see online. Follow your gut: If something feels unreal, it probably is. In these unprecedented, AI-slop-filled times, your best defense is to inspect the videos you're watching more closely. Don't just glance and scroll away without thinking. Check for mangled text, disappearing objects and physics-defying motion. And don't beat yourself up if you get fooled occasionally; even the experts get it wrong sometimes.
(Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
