Sora needs to up its game to match the new Runway AI video model

I always enjoy a chance to mess with AI video generators. Even when they’re terrible, they can be entertaining, and when they pull it off, they can be amazing. So, I was keen to play with Runway’s new Gen-4 model.

The company boasts that Gen-4 (and its smaller, faster sibling, Gen-4 Turbo) outperforms the earlier Gen-3 model in quality and consistency. Gen-4 supposedly keeps characters looking like themselves from scene to scene, with more fluid motion and improved environmental physics.

It’s also supposed to be remarkably good at following directions. You give it a visual reference and some descriptive text, and it produces a video that resembles what you imagined. In fact, that pitch sounded a lot like how OpenAI promotes its own AI video generator, Sora.


Though the videos Sora makes are usually gorgeous, their quality is inconsistent. One scene might be perfect, and the next might have characters floating like ghosts or doors leading nowhere.

Magic movie

Runway pitched Gen-4 as video magic, so I decided to test it in that spirit and see if I could make videos telling the story of a wizard. I devised a few ideas for a little fantasy trilogy starring a wandering wizard.

I wanted the wizard to meet an elf princess and then chase her through magic portals. Then, when he encounters her again, she’s disguised as a magical animal, and he transforms her back into a princess.

The goal wasn’t to create a blockbuster. I just wanted to see how far Gen-4 could stretch with minimal input. Not having any photos of real wizards, I took advantage of the newly upgraded ChatGPT image generator to create convincing still images.

Sora may not be blowing up Hollywood, but I can’t deny the quality of some of the pictures produced by ChatGPT. I made the first video, then used Runway’s option to “fix” the seed so the characters would stay consistent across the videos. I pieced the three videos into a single film below, with a short break between each.

AI Cinema

You can see it’s not perfect. There are some odd object movements, and the character consistency isn’t flawless. Some background elements shimmered oddly, and I wouldn’t put these clips on a theater screen just yet. However, the characters’ movements, expressions, and emotions felt surprisingly real.

I also liked the iteration tools, which didn’t overwhelm me with manual settings but gave me enough control that I felt actively involved in the creation, not just pressing a button and praying for coherence.

Now, will it take down Sora and OpenAI’s many professional filmmaker partners? No, certainly not right now. But if I were an amateur filmmaker, I’d probably at least experiment with it as a relatively cheap way to see what some of my ideas could look like, at least before spending a ton of money on the people needed to make a movie look and feel as powerful as my vision for it.

And if I grew comfortable enough with it, and good enough at coaxing what I wanted out of the AI every time, I might not think about using Sora at all. You don’t need to be a wizard to see that’s the spell Runway is hoping to cast on its potential user base.
