Odyssey, an artificial intelligence (AI) startup founded last year, shared details about its first AI product on Monday. The firm revealed that it is building an AI video model that can create Hollywood-grade visual effects, similar to OpenAI’s Sora, which has not yet been publicly released. Odyssey’s co-founder says the model will let users edit and control the output at a granular level, and that the firm is training multiple generative models to produce the different layers of the output video, each of which can be controlled separately.
How Odyssey’s AI Visual Model Works
In a series of posts on X (formerly Twitter), Odyssey CEO and Co-Founder Oliver Cameron said that the AI startup had raised $9 million (roughly Rs. 75.1 crores) in a seed funding round led by Google Ventures, and that it was building a tool that would deliver high-quality video which users could customise and edit.
Cameron also shared details about Odyssey’s AI technology, claiming that it was designed to generate “Hollywood-grade” video. The executive said the startup was training four generative models to let users take “full control of the core layers of visual storytelling”.
Individually, each model will enable you to precisely configure the minutia of your scene.
Combined, these models will generate video or scenes, but exactly as you wanted.
Going further, our model outputs integrate into existing pipelines in use in Hollywood and beyond. pic.twitter.com/jHZoevLV9n
— Oliver Cameron (@olivercameron) July 8, 2024
Cameron highlighted a key problem with existing AI text-to-video models: the lack of tools or options to control or edit the output. “As a storyteller, you have little ability to direct your environment or characters, or to iterate on the finer details of your shot until it’s just right. More powerful models are required,” he added.
To solve this, the company is using multiple AI models, each of which generates a single layer of the composite video. According to Cameron, four models will independently generate geometry, materials, lighting, and motion. These four layers will be generated simultaneously from a single text prompt and then combined to produce the final video.
The company claims that users will be able to configure each layer separately for greater control over the output. Odyssey also plans to integrate its video outputs into the existing tools and pipelines used in Hollywood to produce visual effects.
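To make the layered approach easier to picture, here is a minimal, purely illustrative Python sketch of a pipeline in which four separate models each generate one layer from the same prompt and a user edits a single layer before compositing. Every name and structure below is hypothetical; Odyssey has not published its models, an API, or any code.

```python
# Illustrative sketch only: the four "layer models" are stand-ins for the
# geometry, materials, lighting, and motion models Cameron describes.

from dataclasses import dataclass


@dataclass
class SceneLayers:
    """The four core layers, each produced by its own generative model."""
    geometry: dict   # 3D shapes and scene layout
    materials: dict  # surface textures and shading
    lighting: dict   # light sources, colour, exposure
    motion: dict     # camera paths and object movement


def run_layer_model(layer_name: str, prompt: str) -> dict:
    """Placeholder for a per-layer generative model call."""
    return {"layer": layer_name, "prompt": prompt, "params": {}}


def generate_scene(prompt: str) -> SceneLayers:
    """Generate all four layers from a single text prompt."""
    return SceneLayers(
        geometry=run_layer_model("geometry", prompt),
        materials=run_layer_model("materials", prompt),
        lighting=run_layer_model("lighting", prompt),
        motion=run_layer_model("motion", prompt),
    )


def composite(layers: SceneLayers) -> dict:
    """Combine the layers into a final video description; in a real
    pipeline this is where rendering or export to VFX tools would happen."""
    return {"video": [layers.geometry, layers.materials,
                      layers.lighting, layers.motion]}


# A user could tweak or regenerate one layer (say, lighting) without
# touching the others, which is the kind of control described above.
scene = generate_scene("a rainy neon street at night, slow dolly shot")
scene.lighting["params"]["exposure"] = -0.5  # edit only the lighting layer
final = composite(scene)
```

The point of the sketch is the separation of concerns: because each layer is a distinct output, editing one does not require regenerating the whole clip, unlike a single end-to-end text-to-video model.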
The AI video model is currently at an early stage of development and does not have a launch date, but the company says it will share regular updates on its progress. Notably, Cameron previously worked at Cruise and Voyage, two startups working on self-driving vehicles.
Jeff Hawke, the company’s other Co-Founder and its CTO, previously served as Vice President of Technology at Wayve, an AI firm developing autonomous driving systems.