
Fierce competition among some of the world’s biggest tech companies has led to a profusion of AI tools that can generate humanlike prose and uncannily realistic images, audio, and video. While those companies promise productivity gains and an AI-powered creativity revolution, fears have also started to swirl around the possibility of an internet that’s so thoroughly muddled by AI-generated content and misinformation that it’s impossible to tell the real from the fake.
Many leading AI developers have, in response, ramped up their efforts to promote AI transparency and detectability.
Most recently, Google announced the launch of its SynthID Detector, a platform that can quickly spot AI-generated content created by one of the company’s generative models: Gemini, Imagen, Lyria, and Veo.
How SynthID Detector works
Originally released in 2023, SynthID is a technology that embeds invisible watermarks — a kind of digital fingerprint that can be detected by machines but not by the human eye — into AI-generated images. SynthID watermarks are designed to remain in place even when images have been cropped, filtered, or undergone some other kind of modification.
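Google hasn’t published SynthID’s internals, but the basic idea of a machine-readable, human-invisible mark can be illustrated with a deliberately crude least-significant-bit scheme. The sketch below is a toy stand-in, not Google’s method, and unlike SynthID it would not survive cropping or filtering:

```python
import numpy as np

# Toy illustration only: hide a fixed bit pattern in the least-significant
# bits of a grayscale image. SynthID's actual scheme is not public and is
# far more robust than this.
PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Write PATTERN into the LSBs of the first few pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)  # view into the copy
    flat[:PATTERN.size] = (flat[:PATTERN.size] & 0xFE) | PATTERN
    return marked

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image)

# Machine-detectable: the pattern reads back exactly.
assert np.array_equal(marked.reshape(-1)[:PATTERN.size] & 1, PATTERN)
# Imperceptible: no pixel changed by more than 1 out of 255.
assert np.abs(marked.astype(int) - image.astype(int)).max() <= 1
```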
Since then, AI models have grown increasingly multimodal, or able to interact with multiple forms of content. Gemini, for example — Google’s chatbot that was launched in response to the viral success of ChatGPT — can respond to a text prompt by generating an image, or to an uploaded image with a text response. Developments in AI-generated audio and video, meanwhile, have also been moving along quickly.
Google has therefore expanded SynthID so that it can watermark not only AI-generated images, but also text, audio, and video.
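The text variant, SynthID Text, has also been open-sourced and integrated into the Hugging Face Transformers library. A minimal sketch of watermarked generation follows; the model name is a placeholder, the keys are illustrative (real deployments keep them private), and exact parameters may vary by Transformers version:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          SynthIDTextWatermarkingConfig)

repo = "google/gemma-2-2b-it"  # placeholder; any compatible causal LM works
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# The watermark is seeded by private integer keys; these values are
# illustrative, not real ones.
config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # context length used when biasing token sampling
)

inputs = tokenizer(["Write a short note about watermarks."],
                   return_tensors="pt")
outputs = model.generate(**inputs, watermarking_config=config,
                         do_sample=True, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```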
SynthID Detector is an online portal that makes it easy to detect a SynthID watermark in media generated by one of Google’s generative AI tools. Users simply upload a file, and Google’s detection system scans it for a watermark, then reports one of three results: watermark detected, not detected, or inconclusive.
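That three-way verdict can be mirrored on the toy scheme above. The sketch below is a hypothetical illustration of the reporting logic, with a match score and thresholds chosen arbitrarily; Google’s actual scoring is not public:

```python
import numpy as np

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # same toy fingerprint

def scan(image: np.ndarray) -> str:
    """Report one of the three outcomes SynthID Detector returns.

    The match fraction and thresholds are illustrative only.
    """
    bits = image.reshape(-1)[:PATTERN.size] & 1
    match = float(np.mean(bits == PATTERN))
    if match == 1.0:
        return "watermark detected"
    if match <= 0.5:  # no better than chance
        return "watermark not detected"
    return "inconclusive"

unmarked = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(scan(unmarked))  # most often "watermark not detected"
```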
Images generated by Gemini automatically carry embedded watermarks; SynthID Detector adds another layer of transparency by making it easy to verify that those invisible marks are present.
Google announced Tuesday that it’s rolling out SynthID Detector to a group of early testers ahead of a broader public launch. It has also opened a waitlist for journalists, media professionals, and AI researchers.
“To continue to inform and empower people engaging with AI-generated content, we believe it’s vital to continue collaborating with the AI community and broaden access to transparency tools,” Pushmeet Kohli, vice president of Science and Strategic Initiatives at Google DeepMind, wrote in a company blog post published Tuesday.
The company also announced on Tuesday a new partnership with GetReal Security, a leading cybersecurity firm specializing in detecting digital misinformation.
The push for transparency
Facing pressure from regulators in Europe and the US who are calling for greater accountability in the AI sector, as well as a growing chorus of public voices warning of the dangers of AI deepfakes, big tech companies are no longer only racing to build the most powerful models — they’re also competing to make it easier to identify AI-generated media.
Tools like Google’s SynthID Detector, in other words, are part of an ongoing push among AI developers to bolster their reputations as leaders in safety and transparency.
While Europe’s General Data Protection Regulation (GDPR) and AI Act require AI companies to have some transparency mechanisms in place, no comprehensive federal regulation of the industry currently exists in the US. Companies operating in the US have therefore largely taken it upon themselves to introduce AI detectability measures.
For companies building generative models, like Google, those measures have often taken the form of watermarking technologies. Social media platforms like TikTok and Instagram, meanwhile, have started to require labels and other disclosures for AI-generated content shared on their platforms.