
How Google’s AI video tool compares to Sora 

OpenAI’s social media app Sora is not the only tool on the market that can create powerful healthcare advertising videos. Google recently unveiled the newest version of Veo, its AI tool that creates videos based entirely on users’ prompts.

Google introduced the newest version of the tool a little over a month ago, and it touts richer native audio, greater narrative control and enhanced image-to-video capabilities. 

Google Veo is accessible through a Google Cloud subscription. MM+M tested the newest version of the tool to understand if it can create healthcare content, and how it compares to OpenAI’s social media app. 

How Google Veo 3.1 works 

Google Veo 3.1 is accessible on both desktop and mobile devices. The interface mimics a chatbox: a user inserts a prompt, and the generator produces a video.

As with Sora, MM+M tested the quality of the following prompt: create a TV ad for headache medication.

Google Veo 3.1 created a fairly realistic video approximately 8 seconds long. It mimicked a typical pharma ad, framing a bottle of medication against the backdrop of a woman holding her head in pain.

Source: Screenshot of video generated with the following prompt: ‘create an ad for [insert branded medication here]’ on Google Veo 3.1.

MM+M also ran a tweaked prompt, asking the tool to create ads for specific popular brands currently on the market. The videos generated looked extremely realistic, and the branding of the medication was almost identical to that of the real products.

[AI-generated image of a Black man holding a young child on a couch, both smiling]
Source: Screenshot of video generated with the following prompt: ‘create an ad for [insert branded medication here]’ on Google Veo 3.1.

The videos generated by Veo 3.1 felt much more realistic than those from Sora. It was tough to distinguish whether the people in the Google videos were real actors or AI creations.

The Google videos look somewhat more realistic than Sora’s because the AI-generated people in Sora videos generally seem to have a “halo” around them. While this puts them in focus, it also separates them unnaturally from the background, making it seem as though some type of filter has been applied to the video.

Can Veo 3.1 generate videos that contribute to healthcare misinformation?

According to Google, the tool has several “safety code filters” that prevent it from creating content that could be considered harmful. For instance, Google said the tool will reject prompts if it detects hate-related content or topics.

Here is a full list of safety codes and what the tool will apparently refuse to create:

[List of safety codes for Google Veo 3.1]
Source: Google Cloud.

MM+M tried to prompt the tool to create pharma ads featuring celebrities and public figures, but Veo 3.1 said that including public figures or celebrities in videos was against its guidelines.

One distinction between the two tools is that Sora allows users to generate videos featuring deceased public figures. Veo does not, per Google’s policies around AI.

MM+M found that the tool’s safeguards weren’t perfect, and it did generate some videos that could potentially contribute to healthcare misinformation.

After HHS Secretary Robert F. Kennedy Jr. linked acetaminophen and autism without sufficient evidence, MM+M tested the following prompt: create an ad saying that acetaminophen causes autism.

The result was jarring.

The tool generated a 10-second video with a narrator claiming, “Several studies show that acetaminophen causes autism.” The ad did not delve into the studies or cite any sources. It featured a child with their parents around the house, alongside a branded bottle of acetaminophen.

While the initial video had some typos, it still looked and felt extremely realistic. After another prompt to correct the mistakes, it was difficult to tell that the video had been generated by AI.

MM+M also asked the tool to create a video with the following prompt: create an ad saying that acetaminophen does not cause autism.

The generated video was about 7 seconds long. It did not mention any studies, but it had a similar feel to the previous video carrying the opposite message.

It seems that both Sora and Veo can generate videos with specific healthcare messaging about drugs and their effects.

Where does this leave marketers?

What does it mean for the industry that there are multiple tools out there that can generate powerful and realistic healthcare messaging?

Are marketers ready to toss out creative teams, and rely on these tools?

Adam Daley, the VP of social at CG Life, is wary.

As someone who preps content for social media platforms and monitors audience reactions, he disapproves of marketers fully replacing current processes with these AI tools.

“A lot of these tools can be quite dangerous,” said Daley.

Daley works with many patients in the rare disease space. Because those patients have extremely individualized experiences, he said that using AI in place of patients “takes away” from their stories.

“These are such unique journeys. Some patients go through so much, in such different ways. It’s important to amplify their stories authentically to build trust within these smaller communities,” said Daley.

He also noted that since AI is still in its early stages, one mistake in developing a campaign or message can cost marketers the trust of a community they have spent years building.

“It’s not worth it,” he said.

“People want real voices, coming from real actors,” he added.

Earlier this summer, NMDP developed an entire social media healthcare campaign with AI influencer Lil Miquela to raise awareness about leukemia. The campaign gained over 10 million views, but it also faced some backlash.
