India’s new AI rules giving platforms just three hours to remove flagged content raise fears of over-censorship

A day after notifying the rules governing artificial intelligence (AI)-generated content, the government released detailed FAQs clarifying both their intent and their implementation. While much of the public debate has focused on labelling requirements and deepfake restrictions, experts say the most consequential shift lies elsewhere: sharply compressed enforcement timelines.

Under the revised framework, social media intermediaries must act on government takedown orders within three hours — down from 36 hours earlier. Requests involving non-consensual nude imagery must be addressed within two hours, compared with 24 hours previously. Impersonation-related content must now be removed within 36 hours, shortened from 72 hours.

Digital policy experts warn that these accelerated deadlines could fundamentally alter India’s intermediary liability regime — creating compliance pressures that incentivise over-removal of content and weaken due process safeguards.

The Three-Hour Compliance Challenge

The government has justified the tighter timelines on the grounds that viral synthetic content — particularly deepfakes and non-consensual imagery — can cause irreversible harm within hours. Delayed takedowns, it argues, undermine victim protection and electoral integrity.

However, industry observers question whether platforms can realistically undertake meaningful legal scrutiny within such narrow windows.

Nikhil Pahwa, Founder of MediaNama, said that while the urgency around deepfake harms is understandable, operational feasibility remains a serious concern.

“It’s a difficult ask,” Pahwa said, noting that compliance orders could arrive at any time. “What if you receive an order at 2 am? You have three hours to comply. In other cases, two hours. How do you take a considered legal view in that time?”

Many global platforms operate centralised moderation and legal review hubs outside India. Meeting the new requirements may necessitate round-the-clock compliance infrastructure within the country, raising costs and operational complexity.

More critically, experts argue that the compressed window leaves little room to evaluate whether a takedown directive is lawful or proportionate.

“If there is no time to assess whether an order is valid, platforms may simply comply to avoid regulatory risk,” Pahwa added, warning that large volumes of simultaneous notices could further strain systems.

Safe Harbour Under Pressure

At the heart of the debate is Safe Harbour — the legal immunity granted to intermediaries for third-party content, conditional on due diligence compliance.

Shortened timelines effectively narrow the space for platforms to contest or question government directives. Failure to act within the prescribed window could jeopardise Safe Harbour protections.

Analysts caution that this incentive structure may encourage what they describe as “over-compliance” — removing content pre-emptively rather than risking liability.

“If the risk of losing Safe Harbour outweighs the cost of taking content down, platforms will err on the side of removal,” Pahwa suggested.

Importantly, the expedited timelines are not strictly confined to AI-generated material. In several categories, the accelerated compliance obligations apply more broadly, raising concerns that measures designed to address deepfakes could spill over into wider content moderation practices.

Due Process and Transparency Concerns

Apar Gupta, Founder-Director of the Internet Freedom Foundation, argued that the rules expand an already extensive content regulation framework through executive notification rather than parliamentary debate.

“The worrying aspect has been opacity,” Gupta said, pointing to instances where affected users are not provided clear notice, an opportunity to respond, or a reasoned explanation when content is removed.

He also highlighted the operationalisation of the government’s Sahyog portal, which enables multiple state-level authorities — not only the central government — to issue takedown notices. The portal is currently under legal challenge in the Karnataka and Bombay High Courts.

With reduced compliance timelines now in force, Gupta warned that high volumes of notices from different authorities could push platforms toward rapid removals without adequate procedural safeguards.

“There is a real risk that measures intended to tackle deepfake harms could result in broader censorship beyond that objective,” he said.

Targeting Deepfakes — Not Routine AI Use

The government maintains that the rules are narrowly tailored to curb deceptive synthetic media — including sexually explicit deepfakes, impersonation videos and fabricated government documents.

The FAQs clarify that AI tools must not be used to generate sexually explicit deepfakes, particularly those involving children, or to create synthetic content that violates bodily privacy. Fabricated government IDs, mark sheets, salary slips, appointment letters and bank statements are explicitly prohibited.

Routine AI-enabled functions, such as image enhancement, translation, compression or automated formatting, fall outside the scope of the rules.

Additionally, all lawfully generated synthetic content must carry a prominent “synthetically generated” label. AI-generated audio must include a prefix stating it was created using AI. Social media platforms are required to verify whether user-uploaded content is AI-generated and ensure appropriate labelling.

Business and Governance Implications

Beyond free speech concerns, the rules carry significant operational and economic implications for digital platforms.

Companies may need to establish 24×7 domestic compliance cells, expand legal review teams and invest in AI detection systems capable of identifying synthetic content at scale. While large incumbents may absorb these costs, smaller platforms and startups could face higher entry barriers.

Rohit Kumar, Founding Partner at The Quantum Hub, has previously emphasised the importance of calibrated implementation when altering intermediary obligations in rapidly evolving technology domains. Regulatory clarity, he argues, is essential to balancing innovation with accountability.

The broader question confronting India’s digital ecosystem is whether accelerated enforcement enhances user protection without eroding procedural safeguards — or whether it recalibrates the balance decisively toward enforcement over deliberation.

As AI-generated content grows more sophisticated and pervasive, the three-hour rule marks one of the most significant shifts in India’s intermediary liability framework. Its long-term impact — on free expression, platform governance and business economics — will likely be tested both in courtrooms and in real-world implementation.

For now, digital rights advocates caution that speed, while critical in curbing harm, must not come at the cost of systemic over-correction.
