
AI video wars heat up: Pika adds Lip Sync powered by ElevenLabs

Even as OpenAI continues to impress by releasing new demo examples of its high-quality AI video generation model Sora, that model remains out of reach of the public for now. But existing AI video generator companies aren’t sitting still: today, rival Pika announced the release of a new feature for its paying subscribers called Lip Sync.

The feature allows users to add spoken dialog to their videos with AI-generated voices from separate generative audio startup ElevenLabs, while also adding matching animation to ensure the speaking characters’ mouths move in time with the dialog.

With ElevenLabs powering it, the new Pika Lip Sync feature supports both text-to-audio generation and uploaded audio tracks, meaning a user can type out or record what they want their AI-generated Pika video characters to say and change the style of the voice that says it.

As stated above, the feature is limited for now in “early access” to Pika Pro subscribers (a $58-per-month plan billed for 12 months up front at $696) and members of Pika’s invitation-only “Super Collaborators” program, available through its Discord server.


Removing a big barrier to full AI narrative films

While Pika’s AI-generated videos remain arguably lower quality and less “realistic” than those shown off by OpenAI’s Sora, or even by rival AI video generation startup Runway, the addition of the new Lip Sync feature puts Pika ahead of both in offering capabilities disruptive to traditional filmmaking software.

With Lip Sync, Pika is addressing one of the last remaining barriers to AI being useful for creating longer narrative films. Most other leading AI video generators don’t currently offer a similar feature natively.

Instead, in order to add spoken dialog and matching lip movements to characters inside an AI video, users have had to make do with third-party tools and cumbersome additions in post-production, which give the resulting video a “low budget,” Monty Python-esque quality.

Separately but semi-relatedly, this week Runway also updated its Multi Motion Brush feature. That feature was introduced last month and allows users to add up to five independent motion directions to different objects and scenery in their video — e.g. a dog jumping up (1) to catch a frisbee moving sideways (2). Now, Runway is adding region detection, which will seek to automatically highlight and select different objects to apply motion to without a user having to manually “paint” over them with the brush (though they can still do so if they wish).

NEW: Motion Brush Region Detection in @runwayml

Motion Brush has received a QoL update! Now, you can automatically select areas in your image without manually brushing over them!

Share your top Motion Brush videos below! pic.twitter.com/eYFuyggvKS

— Nicolas Neubert (@iamneubert) February 27, 2024

Pika also allows users to edit components of their videos and expand the canvas, though it does not provide a similar “brush” tool at the moment, making its motion controls less granular.

Concerns and questions still swirl around AI video training data

However, not everyone was excited about the new Pika feature. Ed Newton-Rex, formerly VP of Audio at Stability AI and now CEO and founder of Fairly Trained, a new AI certification nonprofit dedicated to ensuring AI models seek consent from creators and data holders to train on their work, used the occasion of Pika’s new Lip Sync feature to ask on X what the company trained its video model on.

What is your video model trained on?

Lots of creators are concerned their work is being exploited without their permission. Clarity on the data you’ve used would greatly allay their concerns.

— Ed Newton-Rex (@ednewtonrex) February 27, 2024

Regardless of these questions and concerns, AI video generator companies show no signs of slowing down in their introduction of new features and ever higher-quality video generations, leading to a veritable “arms race” between them. That’s good for users of the tech, but it has many in the professional filmmaking community concerned, including writer/director Tyler Perry, who was widely criticized for announcing a halt to a planned $800 million expansion of his production studio after viewing Sora-generated videos, saying he expected the technology to cost jobs.


