A recent social media post made what appeared to be a surprising announcement about the 2024 presidential race.
“Breaking news: Donald Trump has declared that he is pulling out of the run for president,” the narrator said in a Feb. 22 TikTok video, which was viewed more than 161,000 times before TikTok removed it from the platform.
Of course, Trump hasn’t dropped out of the 2024 race. As of March 12, he’d secured enough delegates to clinch the Republican presidential nomination.
Besides the false claim, the video had another distinctive element: Its audio appeared to be made with artificial intelligence.
Through its fact-checking partnership with TikTok, PolitiFact identified several TikTok accounts spreading false narratives about Trump and the 2024 election.
(Read more about PolitiFact’s partnership with TikTok.)
Videos on YouTube made similar false claims. These videos, which also appeared to have AI-generated audio, were reshared with lower engagement on Facebook and X.
Generative AI is a broad term that describes when computers create content, such as text, photos, videos and audio, by identifying patterns in existing data. It is likely to play an outsized role in national elections that the U.S. and more than 50 other countries are holding this year.
In February in Indonesia, AI-generated cartoons helped rehabilitate the image of a former military general linked to human rights abuses who won the country’s presidential election.
Recent advances in generative AI have made it harder to determine whether online content is real or fake. We talked to experts about how this technology is changing the information landscape and how to spot AI-generated audio, which, unlike video, offers no visual abnormalities that can hint at manipulation.
When contacted for comment, a TikTok spokesperson said, “To apply our harmful misinformation policies, we detect misleading content and send it to fact-checking partners for factual assessment.” Once alerted to the content highlighted in this story, TikTok removed it from the platform.
Experts analyze videos’ use of AI-generated audio
We asked generative AI experts to analyze five TikTok videos from multiple accounts that made false and misleading claims, to determine whether we’d accurately surmised that the audio was AI-generated.
Hafiz Malik, a University of Michigan-Dearborn electrical and computer engineering professor who studies deepfakes, said his AI detection tool classified four of the videos as having AI-generated audio. Some parts of the fifth video were labeled as “deepfake,” while others weren’t.
Siwei Lyu, a University at Buffalo computer science and engineering professor who specializes in digital media forensics, said his AI detection algorithms also classified a majority of the five videos as using AI-generated audio.
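Neither professor’s detection tool is publicly documented here, but the general approach both describe — extract acoustic features from a clip and score them with a model trained on real and synthetic speech — can be sketched roughly. The snippet below is an illustrative assumption, not either researcher’s method: it uses the librosa library to compute MFCC features and a simple scikit-learn classifier trained on a hypothetical labeled dataset.

```python
# Illustrative sketch only -- NOT Malik's or Lyu's tool.
# Assumes a hypothetical labeled dataset of real and AI-generated clips.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str, sr: int = 16000) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCC coefficients."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder training files: 1 = AI-generated narration, 0 = real speech.
train_paths = ["real_01.wav", "real_02.wav", "synthetic_01.wav", "synthetic_02.wav"]
train_labels = [0, 0, 1, 1]

X = np.stack([clip_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score the audio track extracted from a suspect video.
prob_ai = clf.predict_proba(clip_features("tiktok_audio.wav").reshape(1, -1))[0, 1]
print(f"Estimated probability the narration is AI-generated: {prob_ai:.2f}")
```

Real research tools use far richer features and much larger training sets, which is one reason the experts caution that off-the-shelf detectors remain imperfect.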
TikTok videos make outrageous claims
Trump was a focus of these TikTok accounts; several of the accounts used photos of the former president as their profile photos.
These accounts followed a similar playbook: eye-catching headlines often displayed against red backgrounds; videos featuring famous figures, including Trump and Supreme Court Justice Clarence Thomas; and a disembodied narrator.
Some videos made false claims about the well-being of high-profile people — that Trump had had a heart attack and New York Attorney General Letitia James had been hospitalized for gunshot wounds. James sued Trump in 2022, accusing him of fraudulently inflating his net worth; a judge ruled in February that Trump must pay a $454 million penalty.
Another video displayed text that said, “Supreme Court Justice Clarence Thomas joins other justices to remove 2024 race candidate.” This appears to misleadingly refer to the Supreme Court’s case on Trump’s ballot eligibility, in which Thomas and the other justices unanimously ruled that individual states cannot bar presidential candidates, including Trump, from the ballot.
All of these TikTok accounts were created within the past few months. The one that appeared the oldest had videos dating to November; the newest began posting content March 1. Three of the accounts had TikTok’s default username of “user” followed by 13 numbers.
These accounts collectively posted hundreds of videos and garnered hundreds of thousands of views, likes and followers before TikTok removed the accounts and videos.
We also found YouTube videos making false claims that mimicked the TikTok videos’ format: sensational headlines, Trump photos, audio that sounded AI-generated. Most of them were viewed hundreds or thousands of times.
The two YouTube accounts that posted the videos were created in 2021 and 2022 and amassed tens of thousands of followers before we contacted YouTube for comment and the company removed the accounts from the platform.
Before YouTube removed the videos, they were reshared on other social media platforms, including Facebook and X, where they drew only small numbers of likes and views.
(Read more about our partnership with Meta, which owns Facebook and Instagram.)
A YouTube spokesperson did not provide comment for this story by our deadline.
How generative AI is contributing to more misinformation online
Misinformation experts say the newest generations of generative AI are making it easier for people to create and share misleading social media content. And AI-generated audio tends to be cheaper to produce than its video counterpart.
“Now, anyone with access to the internet can have the power of thousands of writers, audio technicians and video producers, for free, and at the push of a button,” said Jack Brewster, enterprise editor at NewsGuard, a company tracking online misinformation.
“In the right hands, that power can be used for good,” Brewster said. “In the wrong hands, that power can be used to pollute our information ecosystem, destabilize democracies and undermine public trust in institutions.”
NewsGuard reported in September 2023 that AI voice technology was being used to spread conspiracy theories en masse across TikTok. The report said NewsGuard “identified a network of 17 TikTok accounts using AI text-to-speech software to generate videos advancing false and unsubstantiated claims, with hundreds of millions of views.”
A 2023 University of British Columbia study that used a dataset from TikTok found that AI text-to-speech technology simplified content creation, motivating content creators to produce more videos.
Study authors Xiaoke Zhang and Mi Zhou told PolitiFact that increased productivity means generative AI “can be deliberately exploited to generate misinformation at a low cost.”
The technology can also help users conceal their identities, which can “diminish their sense of responsibility towards ensuring information accuracy,” Zhang and Zhou said.
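The study’s point about low production cost is easy to see in practice: off-the-shelf text-to-speech tools can turn a written script into narration in a few lines of code. The sketch below uses the gTTS library as one example; it is not the specific software the flagged accounts used, which this story does not identify.

```python
# A few lines of off-the-shelf text-to-speech -- one illustration of how cheaply
# a narration track can be produced. This is not the tool the flagged accounts used.
from gtts import gTTS  # pip install gTTS; requires an internet connection

script = "Breaking news: a sensational claim written in seconds."
gTTS(text=script, lang="en").save("narration.mp3")  # ready to pair with stock images
```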
TikTok requires users to label content that contains AI-generated images, videos or audio “to help viewers contextualize the video and prevent the potential spread of misleading content.” TikTok’s community guidelines bar “inaccurate, misleading, or false content that may cause significant harm to individuals or society.”
No TikTok videos we reviewed had this generative AI label, although some included labels directing viewers to learn more about U.S. elections. Brewster said NewsGuard has also observed many TikTok users bypassing the policy requiring AI-generated content to be identified.
YouTube’s community guidelines don’t allow “misleading or deceptive content that poses a serious risk of egregious harm.” YouTube requires disclosure for election advertising containing “digitally altered or generated materials.” The company said in December that it plans to expand this generative AI disclosure to other content.
How to detect AI-generated audio
Experts say existing AI detection tools are imperfect. They add that as detection tools improve, so does generative AI technology.
AI-generated audio lacks the more obvious visual cues of AI-generated images or videos, such as mouth movements that aren’t synced to audio or distorted physical features.
However, there are ways people can identify AI-generated audio.
Malik, the University of Michigan-Dearborn professor, said to listen for abnormalities in vocal tone, articulation or pacing.
“(AI-generated voices) lack emotions. They lack the rise and fall in the audio that you typically have when you talk,” Malik said. “They are pretty monotonic.”
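Malik’s observation about monotone delivery can be checked crudely with signal analysis: natural speech tends to show more pitch movement than flat synthetic narration. The snippet below is a rough heuristic built on that assumption — not a reliable detector and not Malik’s method — using librosa’s pYIN pitch tracker to measure how much the fundamental frequency varies.

```python
# Rough heuristic only: flags unusually flat (monotone) narration.
# Not a reliable deepfake detector and not Professor Malik's method.
import numpy as np
import librosa

def pitch_variation(path: str, sr: int = 16000) -> float:
    """Return the standard deviation of pitch (Hz) over voiced frames."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                      fmax=librosa.note_to_hz("C6"), sr=sr)
    return float(np.nanstd(f0[voiced_flag]))

variation = pitch_variation("tiktok_audio.wav")
# The 20 Hz threshold is an arbitrary illustration; natural speech usually varies more.
print("Unusually monotone" if variation < 20.0 else "Pitch varies like natural speech")
```

Human speakers can also sound flat, and newer voice clones add artificial intonation, so a low score is at best a prompt to check the clip more carefully.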
Brewster also advised that the “old tactics” are still the best way to avoid AI-generated misinformation. Those include cross-checking information with other sites, being attuned to grammatical errors and odd phrasing, and searching for the names of those who posted to see if they have shared false information in the past.
Our sources
- Interview with Hafiz Malik, an electrical and computer engineering professor at the University of Michigan-Dearborn, March 7, 2024
- Email interview with Jack Brewster, enterprise editor at NewsGuard, March 7, 2024
- Email interview with Xiaoke Zhang, management and information systems PhD student at the University of British Columbia, and Mi Zhou, assistant professor of accounting and information systems at the University of British Columbia, March 8, 2024
- Email interview with Siwei Lyu, a computer science and engineering professor at the University at Buffalo, March 11, 2024
- TikTok video (archived), Feb. 22, 2024
- TikTok video (archived), March 4, 2024
- TikTok video (archived), Feb. 28, 2024
- TikTok video (archived), Jan. 31, 2024
- TikTok video (archived), March 5, 2024
- TikTok video (archived), Feb. 17, 2024
- TikTok account, accessed March 8, 2024
- TikTok account, accessed March 8, 2024
- TikTok account, accessed March 8, 2024
- TikTok account, accessed March 8, 2024
- TikTok account, accessed March 8, 2024
- TikTok, “New labels for disclosing AI-generated content,” Sept. 19, 2023
- TikTok, “Community Guidelines,” March 2023
- TikTok, “Global fact-checking program,” accessed March 13, 2024
- YouTube, “Letitia James has been rushed to the hospital after sustaining injuries for a fierce gun attack.,” Feb. 25, 2024
- YouTube, “Letitia James has been rushed to the hospital after sustaining injuries for a fierce gun attack.,” Feb. 25, 2024
- YouTube, “Trump suffers a heart attack while attacking President Biden,” March 10, 2024
- YouTube, “Congratulations to Nikki Haley. She has finally emerged as the Republican front-runner.,” March 4, 2024
- YouTube account, accessed March 12, 2024
- YouTube account, accessed March 12, 2024
- YouTube, “YouTube Misinformation Policies – How YouTube Works,” accessed March 13, 2024
- YouTube, “Our approach to responsible AI innovation,” Nov. 14, 2023
- YouTube, “Supporting the 2024 United States election,” Dec. 19, 2023
- PolitiFact, “What is generative AI and why is it suddenly everywhere? Here’s how it works,” June 19, 2023
- PolitiFact, “How generative AI could help foreign adversaries influence U.S. elections,” Dec. 5, 2023
- PolitiFact, “How to detect deepfake videos like a fact-checker,” April 19, 2023
- NewsGuard, “AI Voice Technology Used to Create Conspiracy Videos on TikTok, at Scale – NewsGuard,” Sept. 28, 2023
- University of British Columbia, “How does AI-generated voice affect online video creation? : evidence from TikTok,” April 2023
- The Associated Press, “2024 will see high-stakes elections in over 50 countries,” Jan. 10, 2024
- The Associated Press, “New AI voice-cloning tools ‘add fuel’ to misinformation fire,” Feb. 10, 2023
- The Washington Post, “AI voice clones mimic politicians and celebrities, reshaping reality,” Oct. 15, 2023
- Global Investigative Journalism Network, “How to Identify and Investigate AI Audio Deepfakes, a Major 2024 Election Threat,” Feb. 26, 2024
- NBC News, “Why AI-generated audio is so hard to detect,” Feb. 4, 2024
- NBC News, “Trump clinches delegate majority for GOP nomination, NBC News projects, setting up Biden rematch,” March 12, 2024
- The New York Times, “Trump Fraud Trial Penalty Will Exceed $450 Million,” Feb. 16, 2024
- The New York Times, “Trump Prevails in Supreme Court Challenge to His Eligibility,” March 4, 2024
- ABC News, “Supreme Court rejects Trump claim of ‘absolute immunity’ from grand jury subpoena for tax returns,” July 10, 2020
- Rappler, “What can the Philippines learn from how AI was used in Indonesia’s 2024 election?,” Feb. 27, 2024
- New York Attorney General’s office, “Attorney General James Sues Donald Trump for Years of Financial Fraud,” Sept. 21, 2022