AI labelling rules near finalisation as government moves to tackle deepfakes

The government plans mandatory labelling of AI-generated content to curb deepfakes and misinformation. IT Secretary says rules are in final stages.

New Delhi:

The government’s plan to mandate the labelling of AI-generated content will empower users to scrutinise such material and ensure that synthetic output does not masquerade as truth, IT Secretary S Krishnan said on Tuesday. He added that the proposed rules are now in the final stages of formulation.

Speaking at an industry event, Krishnan said clear labelling would help people identify content generated by artificial intelligence and better evaluate its authenticity.

AI labelling rules to cover tech firms and social media platforms

The proposed AI labelling framework will place obligations on two major groups:

  1. AI tool providers such as ChatGPT, Grok, and Gemini
  2. Social media platforms

According to Krishnan, these entities are largely big technology firms that already possess the technical expertise and infrastructure required to implement AI-content labelling mechanisms.

“Labelling something as AI-generated content offers people the opportunity to examine it. You know that it is AI-generated and that it is not masquerading as the truth,” Krishnan said.

He was speaking at the event “Building Safe Spaces for AI Impact: Regulatory and Private Sandboxes”, organised by industry body Nasscom.

Draft rules under legal vetting, finalisation soon

Krishnan confirmed that the draft rules are currently undergoing legal vetting and are close to being finalised.

In October, the government had proposed amendments to the IT Rules that would make it mandatory to clearly label AI-generated content. The proposal also aimed to increase the accountability of large digital platforms such as Facebook and YouTube for verifying and flagging synthetic or manipulated information.

Government concerned over deepfakes and misinformation

The IT Ministry has highlighted growing concerns over deepfake audio, videos, and other synthetic media going viral on social platforms. According to the ministry, generative AI has the potential to create “convincing falsehoods” that can be weaponised to spread misinformation, damage reputations, manipulate elections, or commit financial fraud.

The proposed amendments to the IT Rules aim to provide a clear legal framework for labelling, traceability, and accountability related to synthetically generated or modified content.

Mandatory markers and metadata for AI content

As part of the draft amendments, the government invited stakeholder comments on rules mandating clear labelling, visibility, and metadata embedding for AI-generated or altered information.

The proposal includes requirements for companies to label AI-generated content with prominent markers or identifiers. These would cover at least 10 per cent of the visual display or the initial 10 per cent of the duration of an audio clip, making it easier for users to distinguish synthetic content from authentic media.

No immediate plan for separate AI law, says IT Secretary

Addressing questions on whether India needs a dedicated AI Act, Krishnan said the government is not ruling out the possibility but does not believe the time is right yet.

“We are not having it tomorrow, or in the next session of Parliament, but in future we may need an Act,” he said.

“For now, we believe the existing laws and Acts are adequate to address current requirements, but we do not rule out the possibility of enacting new legislation if future concerns arise.”

About the Author:

Early Bird