OpenAI has published two new Sora videos to its YouTube channel, each made by a professional creative and composed entirely of video generated by the Sora model.
Sora was first revealed earlier this year and, months later, is still only available to a select handful of filmmakers and creative professionals. OpenAI says that is unlikely to change anytime soon.
In one of the new videos, we see a black-and-white, old-style documentary film featuring animals in unexpected roles. The other can only be described as a neon dreamscape, featuring car washes, walks on clouds and glowing coats.
These clips show that Sora still has an edge over current models, even after the major upgrades to Runway Gen-3 and the release of Luma Dream Machine, but not by much. There are already some good alternatives to Sora available.
Longer initial generations mean improved consistency, and the level of natural motion suggests Sora is close to being an open-world model. But the others are catching up quickly and may reach Sora's level before Sora is available to the public.
A neon dreamscape
Tammy Lovin · Sora Showcase – YouTube
Tammy Lovin created the first of the videos. She is a digital artist specializing in 3D and emerging technology rather than filmmaking. For the Sora video, all clips were generated by Sora without any additional VFX.
“What I love most about Sora is that I feel like I am co-creating with it,” she explained in a statement shared under the video. “It feels like teamwork, in the smoothest and most idealistic way possible.”
In the video we jump from a neon car wash, with waves sweeping the viewer away, to a man walking through the clouds and on to a woman lighting up the beach.
Lovin said Sora sparked a new creative process, explaining that seeing ideas that previously existed only in her imagination rendered as video “feels like magic.”
“Ever since I was a kid, I kind of had these montages and surreal visuals about certain things I was seeing in reality, and I’d picture them differently. But since I didn’t become a producer or director, they never really came to life until now. So this is a dream come true.”
Animals in strange places
Ben Desai · Sora Showcase – YouTube
As with the first video, this production comes from someone who is not traditionally a filmmaker. Benjamin Desai is a creative technologist and digital artist whose main focus is augmented reality and immersive content.
He said in a statement that he is “excited to share this imaginative look into an alternate past powered by Sora.” In the video, Desai blends “early 20th-century film aesthetics with whimsical scenarios,” placing animals in unexpected roles.
The video opens with a bear cycling and a gorilla riding a skateboard. As it progresses we see a dancing panda, a man riding a dinosaur and a woman on a giant turtle. It is as disturbing as it is impressive.
“This work aims to ignite a sense of wonder while showcasing the potential of today’s technology,” explained Desai. “Creating with Sora is still an experimental process, involving a lot of iteration and fine-tuning. It’s much more of a human-AI collaboration than a magic button solution.”
When will we see Sora publicly?
OpenAI has stopped offering suggestions for when Sora might be publicly available, instead talking about the work it is doing to prepare the model for release.
Earlier this year, CTO Mira Murati suggested it might be out this summer, but that now looks unlikely. If it does get a general release this year, it will come after the U.S. Presidential Election in November, possibly tied to a major ChatGPT update.
The company says it is currently rolling the model out to a wider group of professionals than just filmmakers, including VFX experts, architects, choreographers, engineering artists and other creatives.
This is to “help us understand the model’s capabilities and limitations, shaping the next phase of research to create increasingly safe AI systems over time,” a spokesperson explained.
Final thoughts
The videos are impressive and continue to showcase the power of the Sora model, but other tools like Luma Labs Dream Machine, Runway Gen-3 and the Chinese Kling AI model now offer similar rendering quality.
While Sora still seems to handle motion more accurately than any of the other models, it's only a matter of time before that gap is closed as well. So the question remains: why is OpenAI being so cautious?