
AI campaigning promises to be a deciding factor in 2024 presidential race

New tech tools have powerful potential to dramatically alter the public’s perception of political candidates, and experts say AI-manipulated images and videos will be a deciding factor in the presidential race.

The candidate who best harnesses new generative artificial intelligence products, they say, will have an advantage in 2024 that will be difficult to overcome for those who stick with traditional politicking.

Previously, the advent of social media apps helped transform newbie Sen. Barack Obama and reality TV star Donald Trump into leaders of the free world. Now, the 2024 presidential AI race is kicking into high gear.

When President Biden formally announced his reelection run, the Republican Party responded with an attack ad branded with a label saying it was “built entirely with AI imagery.” It was, perhaps, the first time a major political party had solely relied upon AI tools in an ad.

The “Beat Biden” video imagines a dystopian world in a hypothetical second term for Mr. Biden, with sirens blaring as a narrator says China has invaded Taiwan, followed by the announcement of closed banks, a southern border overrun by immigrants, and the shuttering of San Francisco due to rampant crime.

“What if the weakest president we’ve ever had were reelected?” the ad says over AI-generated images of Mr. Biden and Vice President Kamala Harris celebrating their reelection.

A recognition has set in among political consultants that voters are not ready for the full onslaught of AI-fueled political content — but it is expected to arrive in the 2024 contests.

The American Association of Political Consultants has condemned the use of AI deepfake technology in political campaigns.

Deepfakes refer to content that includes manipulated or generated audio, images or videos that appear to be real. The AI-made content may replicate someone’s voice, map someone’s face onto another’s body or otherwise trick an audience.

AAPC president R. Rebecca Donatelli said in May that her board unanimously agreed that deepfake AI content is a “dramatically different and dangerous threat to democracy.” The political consultants’ industry association adopted a new policy prohibiting the use of deepfake generative AI content, saying it was contrary to its code of ethics.

The association’s new policy may give pause to campaign staffers and political activists but is unlikely to dissuade legions of online trolls who thrive on provocation and respect no rules.

Foundation for American Innovation senior economist Samuel Hammond, who studies AI policy, said content creators outside the campaigns who use these tools will be major players in determining how voters perceive candidates.

“The real action is going to be outside of the campaigns, just the whole world of online [trolls] and people who aren’t under federal elections laws and stuff like that, just doing whatever they want,” Mr. Hammond said. “Blowing up the information space with a whole bunch of nonsense, it’s going to be hard to distinguish from reality.”

Such content already exists and is capturing people’s imagination. One viral deepfake video that depicted Florida Gov. Ron DeSantis as the character Michael Scott in “The Office” wearing a woman’s suit was shared on Twitter by Mr. Trump’s son, Donald Trump Jr. The AI-manipulated clip was originally posted by the Twitter account @C3PMeme.

The former president also published on Instagram an AI-powered spoof of Mr. DeSantis’ 2024 campaign launch, which was held on Twitter. Mr. Trump’s video depicted Mr. DeSantis in a Twitter conversation with the platform’s owner, Elon Musk, alongside other characters such as Adolf Hitler and the devil.

The avalanche of AI content is not limited to the absurd and satirical.

Mr. DeSantis’ rapid response team shared a video this month on Twitter featuring images of Mr. Trump embracing Dr. Anthony Fauci as part of a montage embossed with the words “Real Life Trump.” The montage includes images that have the markings of fakes, such as incomprehensible text and differences in the shape of an ear, according to an analysis by Agence France-Presse.

A source with knowledge of the DeSantis team’s operation said the video was not an ad and that Mr. Trump’s team had previously posted bogus images of the Florida governor.

Political campaigns’ failure to consistently disclose when they use AI tools in their content is emerging as a distressing trend, according to Center for Democracy & Technology senior technologist William T. Adler.

“I think it’s really a lot to ask of people — of users and voters — to burden all of the responsibility of detecting when images are synthetic,” Mr. Adler said. “Because there are some tells in those images but this technology is just going to keep getting better.”

AI-makers are aware of potential problems caused by using their tools for politicking. OpenAI, the developer of the popular chatbot ChatGPT, disallowed the use of its AI models for political campaigning, according to a March update of its usage policies.

Elected officials, meanwhile, are eager to make new AI rules, which may ultimately shape the tools’ usage in future campaigns.

Mr. Biden’s White House is crafting a new National AI Strategy, while Senate Democrats are leading legislative efforts to author new regulations from Capitol Hill.

At the state level, Washington Gov. Jay Inslee has signed a law that mandates a disclosure when manipulated media such as deepfakes are used in political campaigns.

Mr. Inslee, a Democrat and a failed 2020 presidential candidate, signed the law last month. It requires the disclosure to appear on manipulated text, audio and visual content.
