
Summary
- TikTok’s Symphony Avatars feature allows businesses and brands to create fully customized ads using only generative AI.
- A CNN reporter found that the related Symphony Assistant tool had no apparent guardrails or safety measures, and the videos it generated weren't watermarked either.
- TikTok has since resolved the issue, calling it a technical error and assuring users that harmful AI videos would never have made it onto the platform thanks to its strict content policies.
Generative AI is in extensive use across the tech world, and TikTok is no exception. The platform first announced Symphony Avatars last month as part of its “Creative AI suite,” enabling businesses, brands, and creators to create fully customized ads using generative AI and the likenesses of paid actors (avatars). The feature rolled out earlier this week, but only for people with a TikTok Ads Manager account. However, that restriction was briefly absent: a CNN reporter gained access to one of the Symphony AI tools using a personal account and found it had practically no guardrails or safeguards.
The tool in question, known as Symphony Assistant, was spotted by CNN tech reporter Jon Sarlin, who reportedly managed to access it with a standard TikTok account. Much to Sarlin’s surprise, this generative AI feature could be used to create a convincing-looking video on practically any topic: all the reporter had to do was select an avatar and enter a script of their choice.
Sarlin went on to share examples of the videos created using Symphony Assistant on X/Twitter, including one in which an avatar recites Osama bin Laden’s “Letter to America” in full (via The Verge). To make matters worse, none of these videos carried a watermark indicating they were AI-generated, meaning they could be taken at face value by unsuspecting TikTok users if published on the platform.
TikTok says it has remedied the issue
When CNN reached out to TikTok about these videos, a company spokesperson called it “a technical error” and said it had since been remedied. The spokesperson also made it clear that such videos would never have gone up on the platform due to its strict content policies. Here’s TikTok’s statement to CNN in full:
“A technical error, which has now been resolved, allowed an extremely small number of users to create content using an internal testing version of the tool for a few days. If CNN had attempted to upload the harmful content it created, this content would have been rejected for violating our policies. TikTok is an industry leader in responsible AIGC creation, and we will continue to test and build in the safety mitigations we apply to all TikTok products before public launch.”
By the looks of it, TikTok accidentally made this “internal testing version” of the AI tool accessible to regular users rather than restricting it to internal testers. Fortunately, CNN flagged the issue in time, prompting TikTok to pull the tool. The ByteDance-owned company didn’t say whether Symphony Assistant would return anytime soon, though its statement makes clear that it won’t stop testing new AI features.
This isn’t TikTok’s first foray into AI, and likely won’t be its last. But given that it is among the most used social media apps in the US, regulators probably won’t be overly pleased about this episode, especially considering the ongoing efforts to ban the app.