
Hoang Trung, an accountant in HCMC, says he initially found AI videos entertaining and even educational.
“But after a while, the platforms kept recommending more and more of them, and I became fed up. Many are shallow and crude, and some even make offensive jokes about gender while masquerading as humor.”
Tran Long, a programmer in Da Nang, recalls receiving a call from his father in their hometown about a sensational YouTube video.
“He sent it to me and asked if it was real, wondering why no major news outlets were reporting on it.
“The AI-generated content looked so convincing that people unfamiliar with technology could easily mistake it for the real thing.”
He advised his father to verify such content with him or someone else trustworthy, particularly to protect himself from potential scams and other illegal content.
A video created with Google Veo 3 has garnered thousands of likes and comments on Facebook Reels. Photo by VnExpress/Bao Lam
A Malaysian couple became the talk of social media last week after driving 300 kilometers from Kuala Lumpur to a tourist spot featured in a TikTok video, only to discover it was AI-generated and did not exist.
The rise of tools that generate videos from text and images, such as OpenAI’s Sora, Google’s Veo, Midjourney, and Runway, has fueled a surge in AI-created videos and driven a trend of “faceless” content across social media. While there are no official statistics, many users on Facebook, TikTok, Instagram, and YouTube report encountering at least one AI video daily.
Dr. Le Duy Tan, an IT lecturer at the International University under the Vietnam National University HCMC, says the process of video creation has changed. What once required filming, editing and sometimes hiring professionals can now be done by typing a few prompts and pressing a button. “It feels like having a lightning-fast film production team working round the clock for free.”
He adds that social media algorithms, including those behind TikTok, Facebook and Instagram’s Reels, and YouTube Shorts, have amplified the spread of such content. However, many of these videos are poorly made, giving rise to what is called “AI slop” – low-effort, mass-produced content that is shallow, sometimes offensive, and often spreads misinformation. “Creators need to be creative and use detailed prompts,” Tan says. “Otherwise, vague keywords can produce superficial content that clutters the internet with unrefined material or AI-generated trash.”
Tech expert Nguyen Ngoc Duy Luan agrees, noting that producing a high-quality AI video requires considerable time and effort. “Even advanced models like Sora or Veo are not flawless. They often produce errors, such as awkward hand movements or objects inexplicably floating in the frame.”
Tan notes that excessive exposure to unverified AI videos may leave users more susceptible to false claims, potentially influencing their decisions. He warns that such content could undermine people’s trust in information, including from the mainstream media.
Major platforms have introduced policies to regulate AI content. Meta requires AI-generated or edited material to be labeled on Facebook, Instagram and Threads. YouTube mandates disclosure if AI is used in videos. TikTok prohibits harmful AI content, including deepfakes that promote violence, discrimination or misinformation.
Luan highlights the risks of AI videos being misused for scams, such as promoting a product but delivering counterfeit goods. He adds that such content often targets young audiences and presents unrealistic depictions of the world, which can mislead viewers unfamiliar with how it is created.
He advises viewers to remain cautious.
“Always ask yourself: Where did this video come from? Is the channel trustworthy? Has the information been verified by reputable media or organizations?
“If the video comes from an obscure or newly created channel without a clear background, it is likely unreliable.”