OpenAI has finally launched its much-hyped Sora video generator. The release is part of OpenAI’s “12 Days of Shipmas,” during which the company has been rolling out a string of new products, including the $200-per-month ChatGPT Pro tier. Marques Brownlee was the first to confirm today’s release, and OpenAI followed with a presentation later in the day.
Sora is included in paid ChatGPT memberships at sora.com, meaning you don’t have to pay extra for it, though there are limits. ChatGPT Plus subscribers can generate up to 50 twenty-second videos per month at 480p resolution, while Pro subscribers can generate unlimited videos at a slower speed, or 500 at the fastest speed. Get ready for a lot of AI slop in your feeds.
OpenAI says Sora is available starting today in most of the world, except in the UK and the European Union, presumably due to regulatory issues. CEO Sam Altman said in today’s announcement that OpenAI doesn’t know when it will reach those regions.
“We don’t want our AIs to just be text,” Altman said. “Crucial to our AGI roadmap, AI will learn a lot about how we do things in the world” by training on video data.
OpenAI’s Sora includes a website where users can share their videos with the community.
OpenAI opened today’s presentation by showing off an explore page where “people can come together” and share videos they’ve created with Sora, including the prompts used to generate them. Users can save videos shared by others and use them in their own projects.
Sora was first announced back in February, and OpenAI has slowly been rolling out the model to preview testers. Former CTO Mira Murati famously became the subject of online ridicule after she told the Wall Street Journal she was unsure whether Sora was trained on YouTube videos, which would violate Google’s terms of service.
If this is the current state of Sora, I’m starting to see a $200/mo price tag being justified.
1 min outputs
Text to video
Image to video
Video to Video
Info and details below: pic.twitter.com/mfYnkqjEa7
— Theoretically Media (@TheoMediaAI) December 8, 2024
Either way, early previews of Sora appear disconcertingly realistic. Brownlee shared an AI-generated clip made with Sora that’s intended to look like a local news broadcast. There are still telltale signs that the video is fake: the on-screen text is jumbled and incoherent, and the resolution seems to drop at times. But it’s hard to deny that the video looks very close to the real thing. That should cause some concern, considering older individuals on Facebook already seem to suspend their disbelief and engage with AI-generated slop. And Meta CEO Mark Zuckerberg wants to see more of it in feeds, not less. At what point will people become completely disconnected from reality?
Videos created with Sora can be customized through additional text prompts as part of its “remix” tool—OpenAI showed a video of woolly mammoths running through the desert and used the remix tool to turn them into robots. A storyboard lets users string together several text prompts that Sora will attempt to blend into cohesive scenes. It looks a lot like a standard video editing app with a timeline and clips that can be moved around.
OpenAI’s Sora timeline feature looks like a traditional video editor.
The rumors are true – SORA, OpenAI’s AI video generator, is launching for the public today…
I’ve been using it for about a week now, and have reviewed it:
THE BELOW VIDEO IS 100% AI GENERATED
I’ve learned a lot testing this, here are some new… pic.twitter.com/uA1EhRuK7B
— Marques Brownlee (@MKBHD) December 9, 2024
One notable issue with Sora, as with AI models generally, is that it’s hard to precisely control the output. That should be of some comfort to creatives. Sora could drive down the cost of production where visual effects are concerned, but artists will want control over every detail, and Sora’s controls seem crude at this point. The demos we’ve seen so far may have been heavily edited, and hallucinations remain a problem. Brownlee says Sora struggles to generate realistic physics, often showing objects simply disappearing or passing through each other. It also doesn’t know how fast objects, like soccer balls, should move. Then again, traditional filmmaking requires a lot of post-production editing as well.
It will be interesting to see how films made with Sora will compare in feeling to something like a Tom Cruise movie, where very few visual effects are used and he does his own stunts. There is a lot of bad CGI out there; hopefully, Sora doesn’t make that worse. Use of others’ creative material scraped from the web also remains a concern. OpenAI and other players in the AI space, like Perplexity, have pushed back hard against accusations that their models are trained on stolen data, essentially arguing that anything publicly accessible on the web is fair game. Companies like Reddit and the New York Times disagree, however, and have taken steps to stop their content from being used in AI models.
Because OpenAI receives so much scrutiny, the company says it will initially be more heavy-handed with moderation, placing limits on certain types of content, including videos based on real people.
Concluding today’s presentation, the company’s head of product for Sora said that users shouldn’t expect to be able to make a feature-length film with the tool—at least not initially. “Sora is a tool, it allows you to be in multiple places at once, try different ideas, it’s an extension of the creator who is behind it.”