The second annual AI Film Festival, held last week by generative AI company Runway, provided a glimpse at how artificial intelligence may impact the future of content creation.
Screened in Los Angeles (with a second screening slated for May 9 in New York), the 10 honored shorts were each created with at least some use of AI. And while even the competition’s Grand Prix winner acknowledged that the toolset is not there yet — “the process is currently tedious and imperfect” — the results did signal the speed at which creatives are starting to experiment with the tech.
The events serve not only to showcase AI use but also to begin building a community of interested creatives. Runway co-founder and CEO Cristóbal Valenzuela said the number of entries grew from nearly 300 last year to nearly 3,000 this year.
Research from global insights firm NRG suggests that about half of “creative class professionals” (an estimated 50 million U.S. workers) have at least experimented with AI, and most found time savings to be the biggest benefit of AI use.
Runway reported that all festival finalists used gen AI in some capacity: some to design and create characters and/or environments, some to animate scenes and others in the editing process, while some combined all of these techniques in their submissions.
Live-action shorts were few among the festival entrants, but one of them, Grand Prix winner “Get Me Out” from writer, director and editor Daniel Antebi, was lensed with an ARRI Alexa Mini and combined with VFX for a short/PSA aimed at suicide prevention.
In the piece, the protagonist dances and wrestles with a “second self,” which was performed by a second actor and “re-skinned with AI to make it look as if he had no skin, as if he was made purely of muscle, to represent him wrestling with his inner self,” explained Antebi. “The emotionality of their human performances is what makes our use of AI compelling.”
Antebi also addressed the issue of ethics, saying, “I believe it’s crucial we are thorough and transparent in our use of our tools.” His short used Luma AI to capture and re-create 3D environments, Runway’s video-to-video tool Gen-1 and ComfyUI to create the second self.
“We have a complete list of references to other videos and artists that inspired us at the end of the credits,” Antebi said.
Only a handful of the 10 shorts screened during the event included end credits, and of those that did, the credits were fairly short, seeming to underscore concerns surrounding job security.
Asked about this issue, Runway’s Valenzuela drew a distinction between jobs and tasks. “I think we are going to see a massive change in the tasks over the next couple of years,” he said. “The way you edit films, the way you make videos … it’s going to become easier, faster and cheaper. But the job — who knows how to use those tools in effective ways — will still be there.”
Potential cost savings are difficult to quantify, as the quality of creative work can vary greatly and each production has its own unique requirements.
Antebi said the budget for his six-and-a-half-minute entry was about $3,000, but he estimated that without favors and donated gear it likely would have cost closer to $50,000 to produce.
As for the tech roadmap, Runway co-founder and CTO Anastasis Germanidis said the company is aiming to make AI-created imagery more photoreal and AI tools more controllable. AI development, of course, is moving quickly, and he hinted that some new features could be introduced later this year.