
The government has proposed new rules to combat the rising threat of deepfakes and misinformation. Social media platforms will face greater responsibility for verifying and flagging false information to protect users from harm.
New Delhi:
The government on Wednesday proposed amendments to the IT Rules to combat the rising threat of deepfakes and misinformation. The draft regulations would require that any content created by artificial intelligence (AI) be clearly labelled. This means that big platforms like Facebook and YouTube would have to take more responsibility for verifying and flagging false information to protect users from harm.
The IT Ministry pointed out that fake audio clips, videos, and other types of false media have been spreading rapidly on social media, showing how AI can produce highly realistic but misleading content. Such content can be “weaponised” to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
How these rules enforce labelling
The proposed amendments provide a clear legal basis for the labelling, traceability, and accountability of synthetically generated information. Apart from clearly defining synthetically generated content, the draft mandates labelling, visibility, and metadata embedding to distinguish such content from authentic media. Stakeholder comments on the draft have been invited by November 6, 2025.
These stricter rules would significantly increase the accountability of significant social media intermediaries (SSMIs), defined as platforms with 5 million or more registered users. These platforms would be required to verify and flag synthetic information using reasonable and appropriate technical measures.
How users will identify AI-generated media
Specifically, the draft rules require platforms to label AI-generated content with prominent markers and identifiers covering at least 10 per cent of the visual display, or the initial 10 per cent of an audio clip's duration. Platforms would also have to require a user declaration on whether uploaded information is synthetically generated, deploy technical measures to verify these declarations, and ensure the content is clearly labelled or accompanied by a notice. Furthermore, intermediaries would be prohibited from modifying, suppressing, or removing these labels or identifiers.
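To put the 10 per cent thresholds in concrete terms: on a 1920x1080 video frame of roughly 2.07 million pixels, a label would need to cover at least 207,360 pixels, and a 60-second audio clip would need an audible disclosure across its first six seconds. The minimal sketch below illustrates the arithmetic; the checker and its function names are hypothetical, since the draft rules specify the thresholds but not any particular implementation.

```python
# Illustrative sketch only: the draft IT Rules amendments set 10 per cent
# minimums for label prominence but prescribe no implementation.
# All function and parameter names here are hypothetical.

def visual_label_is_prominent(frame_width_px: int, frame_height_px: int,
                              label_width_px: int, label_height_px: int) -> bool:
    """Check that the label covers at least 10 per cent of the display area."""
    frame_area = frame_width_px * frame_height_px
    label_area = label_width_px * label_height_px
    return label_area >= 0.10 * frame_area

def audio_label_is_prominent(clip_seconds: float, disclosure_seconds: float) -> bool:
    """Check that the disclosure spans at least the initial 10 per cent of the clip."""
    return disclosure_seconds >= 0.10 * clip_seconds

# A 1920x1080 frame needs a label of at least 207,360 px^2;
# a 640x360 overlay (230,400 px^2) would pass.
print(visual_label_is_prominent(1920, 1080, 640, 360))  # True

# A 60-second audio clip needs the disclosure over its first 6 seconds.
print(audio_label_is_prominent(60.0, 6.0))  # True
```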
“In Parliament as well as many forums, there have been demands that something be done about deepfakes, which are harming society…people using some prominent person’s image, which then affects their personal lives, and privacy,” said IT Minister Ashwini Vaishnaw. He added, “Steps we have taken aim to ensure that users get to know whether something is synthetic or real. It is important that users know what they are seeing”. The mandatory labelling and visibility requirements would enable users to clearly distinguish synthetic content from authentic content.
Once these rules are finalised, any compliance failure could cost large platforms the safe harbour protection they currently enjoy, which shields them from liability for content posted by their users.
Why the draft rules have been proposed
The IT Ministry stated that with the increasing availability of generative AI tools and the resulting proliferation of deepfakes, the potential for misuse—including spreading misinformation, manipulating elections, or impersonating individuals—has grown significantly. Accordingly, the draft amendments to the IT Rules, 2021, aim to strengthen the due diligence obligations for intermediaries, especially SSMIs, and for platforms that enable the creation or modification of synthetically generated content.
The draft introduces a new clause defining synthetically generated content as information that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears reasonably authentic or true.
Why these rules are important
A note by the IT Ministry highlighted that policymakers globally and in India are increasingly concerned about fabricated images, videos, and audio clips (deepfakes) that are often indistinguishable from real content. These are being blatantly used to produce non-consensual intimate or obscene imagery, mislead the public with fabricated political or news content, or commit fraud or impersonation for financial gain.
The latest move is significant, as India is among the top markets for global social media platforms like Facebook and WhatsApp. A senior Meta official previously said India is the largest market for Meta AI usage, and OpenAI CEO Sam Altman has noted that India, currently the company's second-largest market, could soon become its largest globally.
Do the rules apply to all AI-generated media or only to media posted online?
Asked whether the rules would apply to content generated on platforms like OpenAI’s Sora or Google’s Gemini, sources clarified that the obligation is triggered when a video is posted for dissemination. The onus in such a case would fall on the intermediaries displaying the media and on the users who upload it to those platforms. As for messaging platforms like WhatsApp, sources said that once synthetic content is brought to their notice, they will be required to take steps to prevent it from going viral.
India has already witnessed an alarming rise in AI-generated deepfakes, prompting court interventions. Recent viral cases include misleading ads depicting Sadhguru’s fake arrest, which the Delhi High Court ordered Google to remove. Earlier this month, Aishwarya Rai Bachchan and Abhishek Bachchan sued YouTube and Google, seeking damages over alleged AI deepfake videos.