Runway AI’s Gen-3 Alpha text-to-video model beats OpenAI’s Sora: 9 incredible videos | Technology News

Artificial Intelligence is progressing at a brisk pace, and so is its adoption. On one hand, the threat of AI replacing jobs looms large; on the other, it is opening up new ways to amplify human creativity. US-based Runway AI has introduced its latest AI model, Gen-3 Alpha, which the company describes as ‘a new frontier for high-fidelity, controllable video generation.’

Gen-3 Alpha is the first in an upcoming series of models trained by Runway on new infrastructure built for large-scale multimodal training. Runway claims the new model is a major improvement over Gen-2 in fidelity, consistency, and motion, and a step towards building General World Models – systems that understand the visual world and its dynamics, which the company sees as the next major advancement in AI.

Ever since the launch of the Gen-3 Alpha model, internet users have been sharing their creations with the world. These high-definition videos showcase the versatility and range of Runway AI’s new model. Here is a look at some spellbinding videos made with Gen-3 Alpha.

Create your monster fiction  

A text-to-video model like Gen-3 Alpha can truly amplify your creativity. A user on X (formerly Twitter) known as Uncanny Harry AI used the model to create a short video of a fictional monster rising from the river Thames in London, evoking Godzilla and other famed kaiju. The 11-second clip is cinematic: a grim London scene under a cloudy sky, with the monster slowly rising above the fierce waves.

“a cinematic shot of a hideous monster rising from the river Thames in London” Gen 3 pic.twitter.com/X31GQLOSL7

— Uncanny Harry AI (@Uncanny_Harry) June 28, 2024

Time lapse pencil drawing

Another user, Anu Aakash, who describes herself as ‘exploring AI tools’ in her bio on X, shared a short Gen-3 Alpha video of a pencil drawing of a girl taking shape in time-lapse. Aakash’s prompt described a top-view time-lapse of a pencil artwork drawn by hand, depicting a girl with ‘rabbit hair’ from beginning to end. She acknowledged that ‘rabbit hair’ was a typo (she intended “rabbit-like ears”), but seemed pleased with Gen-3 Alpha’s output nonetheless.

runway gen3

top view timelapse video of a pencil artwork drawn by a hand, it is an art of a girl with rabbit hair from beginning till the finish

*rabbit hair is a typo –I intended rabbit like ears, but gen3 got it pic.twitter.com/G9FpZYo3kZ

— Anu Aakash (@anukaakash) June 29, 2024

A floral storytelling

Gen-3 Alpha can materialise even your wildest dreams. Martin Haerlin, another X user, used the model to create a visual carousel of flowers: pink and red petals unfurling over a megacity, guns shooting flowers of all colours and sizes, a warrior’s bow turning into a sunflower, daisies floating in the air, and soldiers and martial artists manoeuvring with flowers. In his post, Haerlin exclaimed that with Gen-3 Alpha his toolset for storytelling felt supercharged and upleveled ‘by a lot’.

It feels like my toolset for storytelling has been supercharged and upleveled! By a lot!
Thanks to @runwayml Gen-3
🦸🧨🔥💣
Song is from @UppbeatOfficial, “Gotta be free” from All Good Folks pic.twitter.com/upRC6kcNpM

— Martin Haerlin (@Martin_Haerlin) June 29, 2024

Create your sci-fi movie

Gen-3 Alpha could potentially turn your sci-fi ideas into reality. Bilawal Sidhu, a former Google Maps AR/VR creator, took to his X account to share his experiments with Runway AI’s Gen-3 Alpha. In a long thread of videos, he praised the model for its impressive particle-simulation visuals, light-interaction effects, and, in some cases, complex camera movements.

For starters, you can get some amazing particle simulation effects w/ gen-3

“the moment the whole universe came into existence”
“ripping through the fabric of space and time” pic.twitter.com/g8HXawfjxT

— Bilawal Sidhu (@bilawalsidhu) June 29, 2024

Sidhu also highlighted Gen-3 Alpha’s ability to maintain high-frequency detail, generate first-person-shooter-style video, and respond to control via text prompts despite imperfect physics. He also noted realistic motion graphics, physics, and city visualisation. Although he found the human renderings good, he said they were difficult to control, while heads-up-display and augmented-reality prompts came out realistic.

Text prompts to control camera speeds

AI art enthusiast vkuoo shared a unique Gen-3 Alpha creation, perhaps a first in AI text-to-video generation: a demo in which camera speed is controlled through text commands. When another user asked for the prompt behind the video, vkuoo responded with it: “Ultra-fast disorienting hyper-lapse racing through a tunnel into a labyrinth of rapidly growing vines. The tunnel lights flicker at high frequency, and the vines quickly grow to block the path. Rapid camera movement with intense focus shifts.”

Check this out: controlling camera speed through text commands! 🎥 @runwayml #gen3 pic.twitter.com/luP52r0jCr

— vkuoo (@vkuoo) June 29, 2024

A video of a cruising sports car

Heather Cooper, whose bio describes her as an AI educator and consultant, shared a stunning short video of a sports car cruising along wet pavement. The low-angle shot shows the futuristic car moving through a street flanked by neon lights. Cooper used the prompt: “Low-angle tracking shot following a sleek sports car with neon lights reflecting off the wet pavement.”

Gen-3 is wild 👀

Prompt: Low-angle tracking shot following a sleek sports car with neon lights reflecting off the wet pavement@runwayml pic.twitter.com/ig6jtFoOoM

— Heather Cooper (@HBCoop_) June 28, 2024

Rich details and realistic lip sync

Chrissie, another X user and AI video creator, shared a short clip made with Gen-3 Alpha. The clip shows a woman walking and speaking about Gen-3 Alpha, and Chrissie noted that the model’s lip-sync abilities are fun. “Look at her expression as she gives that light little shimmy at the end lol,” she wrote.

@runwayml GEN-3 and lipsync is so fun. Look at her expression as she gives that light little shimmy at the end lol pic.twitter.com/RuXsB9NZE1

— Chrissie (@pressmanc) June 29, 2024

Hyper-realistic visuals

Digital artist and filmmaker Christopher Fryant shared a 53-second short film called ‘This Town Isn’t Real’, made with the Gen-3 Alpha model plus some additional editing and sound design of his own. Fryant said the output is entirely text-to-video. The footage shows the camera panning through a night scene of people in motion; at first glance, it could pass for real footage.

“This Town Isn’t Real”

Found footage style short, animated with @runwayml‘s new #gen3 video AI tool (early access, available to CPP members currently).

This one is entirely text to video, with some editing and sound design by me. pic.twitter.com/2FKn77nHb8

— Christopher Fryant (@cfryant) June 29, 2024

Flying through time and landscapes

Blaine Brown, whose X bio says he is an innovation leader, tried Gen-3 Alpha for the first time and took to his X account to share the output. His prompt read: “A fly through of a castle in Ireland that becomes a futuristic cyberpunk city with skyscrapers.” The video is rich in detail, accurately depicting the castle’s corner towers and cobblestone walkways, with a smooth transition into a cyberpunk city of shimmering skyscrapers.

Holy smokes! @runwayml #Gen3 is out for creative parters. This is my first attempt & I’m already in love 😍

“A fly through of a castle in Ireland that becomes a futuristic cyberpunk city with skyscrapers” pic.twitter.com/SKcfwMDJE9

— Blaine Brown  (@blizaine) June 28, 2024

AI video models are a testament to the potential AI holds in the field of visual communication. Earlier this year, OpenAI shocked the world with its impressive text-to-video model Sora. And while AI video models have been around for some time, more and more AI start-ups have recently been launching models that outdo their predecessors.

Based on the above creations from various users, Runway’s Gen-3 Alpha appears to be on par with Sora, and in some cases even exceeds it, judging by the sample videos OpenAI has shared. Sora is not yet publicly available. Emad Mostaque, former CEO of Stability AI, also shared a post drawing comparisons between Gen-3 Alpha and Sora.

50mm & 5s is a pretty good keyword for @runwayml Gen-3

Still failure points but improving fast, sure scale & feedback will iron them out pic.twitter.com/Fh1JdAuGZg

— Emad (@EMostaque) July 1, 2024

Runway AI is among the earliest startups to work on AI video generation. Gen-3 Alpha, which is now generally available, lets users make hyper-realistic AI videos from text, image, or even video prompts. Anyone signed up to the RunwayML platform can use the model. While Gen-1 and Gen-2 were free, using Gen-3 requires a subscription starting at $12 per editor per month.

