Runway, one of the first AI video generation platforms to launch publicly, has unveiled the third generation of its model, and it's a huge step forward for the technology that could make this one of the best AI video generators yet.
In the same way that OpenAI says its end goal is artificial general intelligence, Runway's end goal is general world models: AI systems that can build an internal representation of an environment and use that representation to simulate events inside it.
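To make that definition concrete, here is a minimal toy sketch in Python. It is entirely hypothetical and not Runway's architecture: a world model holds an internal transition function and rolls it forward to simulate events without ever touching the real environment.

```python
# A toy, hypothetical illustration of the "world model" idea (not
# Runway's actual system): the model approximates the environment's
# dynamics, then "imagines" what happens next entirely internally.

def real_environment(state: float, action: float) -> float:
    """Hidden dynamics the world model is meant to approximate."""
    return 0.9 * state + 0.1 * action

def world_model(state: float, action: float) -> float:
    """Internal model; pretend these coefficients were learned from data."""
    return 0.88 * state + 0.12 * action

def imagine_rollout(start: float, actions: list[float]) -> list[float]:
    """Simulate a sequence of events entirely inside the model."""
    states = [start]
    for a in actions:
        states.append(world_model(states[-1], a))
    return states

actions = [1.0, -0.5, 0.25, 0.0]
imagined = imagine_rollout(1.0, actions)

# Compare the imagined trajectory against the real environment.
actual = [1.0]
for a in actions:
    actual.append(real_environment(actual[-1], a))

print("imagined:", [round(s, 3) for s in imagined])
print("actual:  ", [round(s, 3) for s in actual])
```

The closer the learned transition function tracks the real dynamics, the more useful the internal simulation becomes; a video model like Gen-3 can be thought of as doing this at vastly greater scale over pixels rather than toy numbers.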
Gen-3 Alpha, the new model from Runway, is the closest the startup has come to achieving that long-term ambition. The company says it will power all image-to-video and text-to-video tools on the Runway platform, as well as Motion Brush and other features such as text-to-image.
Runway: How does Gen-3 differ from Gen-2?
Runway announced the model on X on June 17, 2024: "Introducing Gen-3 Alpha: Runway's new base model for video generation. Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions."
Runway hasn't said when Gen-3 will replace the current Gen-2 models, but it added that there are new safeguards in place for Gen-3, including improved visual moderation and support for the C2PA standard, which makes it easier to trace the origin of different types of media.
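As a rough sketch of what C2PA support means in practice, the snippet below inspects a file's provenance manifest using the open-source c2patool CLI from the Content Authenticity Initiative. This is an assumption-laden illustration: it assumes c2patool is installed, that the file actually carries C2PA metadata, and the filename is made up; Runway hasn't published details of its implementation.

```python
import json
import shutil
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the C2PA manifest embedded in a media file, if any.

    Sketch only: shells out to the open-source c2patool CLI, whose
    default invocation prints the manifest store as JSON.
    """
    if shutil.which("c2patool") is None:
        raise RuntimeError("c2patool not found on PATH")
    result = subprocess.run(
        ["c2patool", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest present, or the file was unreadable
    return json.loads(result.stdout)

# "generated_clip.mp4" is a hypothetical filename for illustration.
manifest = read_c2pa_manifest("generated_clip.mp4")
print(manifest)
```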
This is the latest in a new generation of AI video models offering longer clips and improved motion, alongside OpenAI's Sora, Luma Labs' Dream Machine and Kling.
Runway says Gen-3 is the first in a series of models trained on new infrastructure built specifically for large-scale multimodal training, improving fidelity, consistency and motion.
One of the lessons learned from Sora is that scale matters above most other things, so adding more compute and data can significantly improve the model.
What does Gen-3 look like?
In a follow-up post on X the same day, Runway added: "This leap forward in technology represents a significant milestone in our commitment to empowering artists, paving the way for the next generation of creative and artistic innovation. Gen-3 Alpha will be available for everyone over the coming days. Prompt: A slow cinematic push…"
The new model was trained on video and images at the same time, which Runway says will improve visual quality from text-to-video prompts.
The new model will also power new tools offering more fine-grained control over things like structure, style and motion.
I haven't had the chance to try Gen-3 myself, and it is still in alpha, but the videos seem to show a significant improvement in motion and prompt adherence.
Each video is about ten seconds long, roughly twice the length of a Luma default and similar to Sora's clips. It is also nearly three times the length of the current Runway Gen-2 videos.
1. Taking the train
(Image credit: Runway Gen-3)
Prompt: “Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.”
2. Spaceman in the city
(Image credit: Runway Gen-3)
Prompt: “An astronaut running through an alley in Rio de Janeiro.”
3. An underwater community
(Image credit: Runway Gen-3)
Prompt: “FPV flying through the colorful, coral-lined streets of an underwater suburban neighborhood.”
4. Hot Air balloon
(Image credit: Runway Gen-3)
Prompt: “Handheld tracking shot at night, following a dirty blue balloon floating above the ground in an abandoned old European street.”
5. The big picture
(Image credit: Runway Gen-3)
Prompt: “An extreme close-up shot of an ant emerging from its nest. The camera pulls back revealing a neighborhood beyond the hill.”
6. Realistic people
(Image credit: Runway Gen-3)
Prompt: “Zoom in shot to the face of a young woman sitting on a bench in the middle of an empty school gym.”
7. Drone through a castle
(Image credit: Runway Gen-3)
Prompt: “A FPV drone shot through a castle on a cliff.”