
Runway AI Inc. today announced that it has closed a $308 million funding round led by General Atlantic.
The growth equity firm was joined by several other investors including Fidelity, Baillie Gifford, SoftBank Group Corp. and Nvidia Corp. The chipmaker also backed Runway’s previous $141 million funding round in 2023. Bloomberg reported that the artificial intelligence startup is now worth more than $3 billion.
Runway’s latest raise is not unexpected. Word of the Series D investment first emerged last July, when The Information reported that Runway was in talks with General Atlantic about new funding.
The investment comes a day after Runway debuted Gen-4, its newest video generation model. The model lets users create clips up to 10 seconds long from a reference image and natural-language instructions. It doubles as an image generation tool.
Compared with Runway’s previous video generator, Gen-4 is significantly better at keeping the look of objects consistent across a video’s frames. It can maintain consistency even if the object’s background changes.
Runway stated today that the new funding will support its AI development efforts. A job opening on the company’s website hints that the engineering push will focus on enhancing its AI training datasets. According to the posting, Runway is hiring for a machine learning director who can “establish and oversee data partnerships to obtain high-quality datasets for our AI models.”
In addition to sourcing training data from external partners, the company may be planning to create datasets in-house. Runway currently has openings for a screenwriter, a visual effects artist and an animator. An in-house creative team would enable the company to create custom video datasets for its AI training projects.
Another job posting, for an engineering manager, hints that Runway’s development roadmap will prioritize diffusion models and large language models. Neural networks of the former variety are the go-to choice for video generation tasks. They generate clips by starting from a video of pure noise and then gradually replacing the noise with the visuals the user requested.
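The iterative denoising idea can be illustrated with a toy sketch. This is not Runway’s actual model: a real diffusion model uses a large trained neural network to predict the noise at each step, whereas the stand-in “denoiser” below cheats by comparing against the clean target, purely to show the start-from-noise, refine-step-by-step loop.

```python
import random

random.seed(0)

def reverse_diffusion(target, steps=50):
    # Start from pure noise, the way a diffusion model begins generation.
    x = [random.gauss(0, 1) for _ in target]
    for _ in range(steps):
        # Stand-in "denoiser": a real model would predict this noise with a
        # trained network; here we cheat by using the known clean target.
        predicted_noise = [xi - ti for xi, ti in zip(x, target)]
        # Remove a fraction of the predicted noise at each step.
        x = [xi - 0.2 * ni for xi, ni in zip(x, predicted_noise)]
    return x

clean = [1.0] * 16  # toy stand-in for the pixel values of one frame
result = reverse_diffusion(clean)
error = max(abs(r - c) for r, c in zip(result, clean))
```

After 50 refinement steps the noisy start has converged to the target, mirroring how each denoising pass leaves the video slightly closer to the requested visuals.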
LLMs, the other apparent focus of Runway’s development roadmap, can’t generate videos. However, the Transformer architecture on which most LLMs are based can be used to enhance diffusion models. Replacing some of a diffusion model’s components with a Transformer module speeds up training in some cases.
Today’s funding round should put Runway in a better position to compete with OpenAI, which offers a rival video generator called Sora. The latter model can generate clips up to 20 seconds in length. Earlier this week, OpenAI temporarily disabled Sora’s video generation capability for new users because the service was “experiencing heavy traffic.”
Photo: Runway