Google will require all “verified election advertisers” to “prominently disclose” when their ads use artificial intelligence amid the growing threat of misinformation posed by the fast-spreading technology, the tech giant said.
The company’s updated political content policy, which goes into effect in November, a year before the 2024 presidential election, will demand a “clear and conspicuous” notice “in a location where it is likely to be noticed by users” when ads make use of “synthetic content that inauthentically depicts real or realistic-looking people or events,” Google announced Wednesday.
“For years we’ve provided additional levels of transparency for election ads, including ‘paid for by’ disclosures and a publicly available ads library that provides people with more information about the election ads they see on our platforms,” a Google spokesperson told The Post on Thursday.
“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated.”
Google’s decision comes as other major AI firms warn that the technology can be used to undermine elections.
On Thursday, Microsoft, a backer of OpenAI and its wildly popular ChatGPT, said its researchers found what they believe is a network of fake, Chinese-controlled social media accounts seeking to influence US voters by using artificial intelligence.
A Chinese embassy spokesperson in Washington said that accusations of China using AI to create fake social media accounts were “full of prejudice and malicious speculation” and that China advocates for the safe use of AI.
Political content made with generative AI tools like ChatGPT, DALL-E and Google’s own Bard has already been deployed, including deepfake images of Donald Trump resisting arrest and of his wife, Melania, yelling at police.
Trump was also depicted hugging Dr. Anthony Fauci during the COVID-19 pandemic in an AI-assisted campaign ad released in June by his GOP presidential primary rival, Ron DeSantis.
President Joe Biden has also been the target of several ads using AI, including one by the Republican National Committee in April that showed him celebrating with Vice President Kamala Harris after winning reelection to a second term.
The 30-second clip showed Biden and Harris celebrating, then cut to harrowing scenes of China invading Taiwan, shuttered US banks and cities overrun by crime.
OpenAI boss Sam Altman has previously warned that he’s “nervous” about the ways his own tech could disrupt elections, calling it a “significant area of concern” that required federal regulation.
There is currently little regulation governing the use of AI in the US; only last month did the Federal Election Commission begin a process that could result in the regulation of AI-generated deepfakes in political ads.
Senate Majority Leader Chuck Schumer has also slated an AI summit for next week with some of the biggest names in tech.
Biden has warned that unfettered AI could pose a threat “to our democracy,” but he also oddly declared back in July: “I am the AI.”