I just put Luma’s new Ray2 AI video generator to the test — and it’s better than Sora

Luma Labs has given its popular Dream Machine AI creativity platform a major upgrade, bringing the new Ray2 video model into the system. Ray2 is a significant step up from the previous Ray 1.6, offering better realism and more natural motion.

Ray2 was announced last year as part of a new partnership with Amazon Web Services (AWS). It has finally been integrated into Dream Machine, where it is now the default option when you create a video.

The AI startup describes Ray2 as “a new frontier in video generative models.” To achieve this level of visual and motion realism, Luma scaled compute 10x compared to previous models, which it says has “unlocked new freedoms of creative expression and visual storytelling.”

I’ve been testing Luma’s Ray2 since launch, and the video generations are very impressive. The service is slow due to demand, though, with some clips refusing to generate or taking too long to be useful. These are the same teething problems any platform faces when launching a new model, so I’d say Ray2 is certainly in the running to be one of the best AI video generators available.

Putting Ray2 to the test

Being built into Dream Machine already gives Ray2 a leg up compared to other video models because of how impressive Dream Machine is to work with. It makes creating content with AI more collaborative and less about throwing a prompt into the wind and hoping for the best.

Accessing Ray2 is as simple as starting a new board in Dream Machine, selecting Video from the prompt bar and typing your prompt. The AI handles the rest, generating two videos and presenting the usual adaptable interface where you can change elements of the prompt.

Due to the issues I mentioned earlier, only about half of the prompts I tried actually generated, and because of how slow it was I couldn’t make use of the re-prompting and collaboration features that make Dream Machine so good. Despite that, I was still impressed.

In one example I asked Luma’s Ray2 to create a video of a knife slicing into an onion. Knife work is something no video model, with the exception of Google Veo 2, has been able to handle consistently well. While it wasn’t perfect, the motion was spot on and the knife did slice.

Ray2 is also particularly good at animal motion. I asked it to generate several videos of dogs — including one stretching and another catching butterflies — and it did both very well. There were some elements of the butterfly video that were not perfect, but with Dream Machine that can be relatively easily corrected by replying to the original video and specifying what to change.

When videos did generate, they came through extremely fast. It doesn’t look like Luma had to sacrifice generation speed (something Dream Machine was famous for) to improve quality. We seem to get quality and speed in a single model.

Luma Labs Ray2: Final thoughts

Overall it does appear that Ray2 is a significant step forward in generative video. Its leap feels very similar to the jump we saw when Luma first launched Dream Machine. It is also slightly better in terms of motion than OpenAI’s flagship Sora model.

It is not perfect. There are still artifact issues, sometimes the motion doesn’t make sense, and it’s currently text-to-video only (although image-to-video is coming soon). However, these are problems that plague every other model I’ve tried, including Sora, Runway, Kling and Pika.

The biggest takeaway is just how fast AI video is evolving. Being able to generate ten seconds of high-resolution video nearly indistinguishable from something filmed with a camera would have been unthinkable two years ago. Today, it’s commonplace and available from multiple companies.
