India Tightens Grip on AI Deepfakes With 3-Hour Takedown Rule and Mandatory Labelling
- MGMMTeam

The Indian government has unveiled a major overhaul of its digital regulations to address the fast-growing threat of artificial intelligence-generated deepfakes and synthetic content. Through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the government has introduced strict timelines and transparency requirements aimed at curbing the misuse of generative AI technologies.
The amended rules, notified on February 10 and set to come into force from February 20, represent one of India’s strongest regulatory responses to AI-driven misinformation, impersonation, and digital fraud. By formally bringing “synthetically generated information” under the IT regulatory framework, the government has signalled its intent to close legal gaps and strengthen accountability across the digital ecosystem.

Three-Hour Deadline for Removing Objectionable Content
One of the most significant changes under the new framework is the drastic reduction in the time allowed for platforms to remove unlawful or objectionable content. Social media platforms and digital intermediaries will now be required to take down such content within three hours of receiving a valid complaint or lawful government order.
This marks a sharp departure from earlier timelines, which allowed platforms up to 36 hours in many cases. The government has justified the tighter deadline by pointing to the speed at which deepfakes and misleading AI-generated content can go viral, often causing reputational damage, financial loss, or public disorder before corrective action can be taken.
Officials have said the new timelines are designed to ensure swift intervention, especially in cases involving impersonation, misinformation, and content that poses risks to individuals or public safety.
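For platform engineering teams, the new window reduces to a simple service-level check. The sketch below is a minimal illustration in Python, not anything prescribed by the rules; the helper names and the example timestamp are assumptions:

```python
from datetime import datetime, timedelta, timezone

# Removal window under the amended rules; earlier rules commonly allowed 36 hours.
TAKEDOWN_WINDOW = timedelta(hours=3)

def takedown_deadline(received_at: datetime) -> datetime:
    """Latest time by which flagged content must be removed."""
    return received_at + TAKEDOWN_WINDOW

def is_overdue(received_at: datetime, now: datetime | None = None) -> bool:
    """True once the three-hour window for this complaint has lapsed."""
    now = now or datetime.now(timezone.utc)
    return now > takedown_deadline(received_at)

# Example with an arbitrary timestamp: a complaint logged at 09:00 UTC
# must be actioned by 12:00 UTC the same day.
received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(received))  # 2026-02-20 12:00:00+00:00
```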
Mandatory Labelling of AI-Generated and Altered Content
A central pillar of the updated rules is the mandatory labelling of all AI-generated or AI-altered content. Platforms must ensure that any image, video, audio, or other media created or substantially modified using artificial intelligence is clearly and prominently identified as synthetic.
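The rules do not prescribe a particular visual format for these labels. Purely as an illustration, a sketch like the following, using the Pillow imaging library, stamps a prominent banner onto an image; the banner styling and wording are assumptions, not anything mandated:

```python
from PIL import Image, ImageDraw

def add_visible_ai_label(in_path: str, out_path: str,
                         text: str = "AI-GENERATED CONTENT") -> None:
    """Stamp a semi-transparent synthetic-content banner onto an image."""
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Dark banner along the bottom edge, sized relative to the image.
    banner_h = max(24, img.height // 12)
    draw.rectangle([(0, img.height - banner_h), (img.width, img.height)],
                   fill=(0, 0, 0, 160))
    draw.text((10, img.height - banner_h + banner_h // 4), text,
              fill=(255, 255, 255, 255))

    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)
```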
In addition to visible labels, intermediaries are required to embed persistent technical markers, including metadata and unique identifiers, to help trace the origin of AI-generated content. These identifiers are meant to remain intact even when content is reshared, ensuring that synthetic media can be identified across platforms and over time.
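How these markers are to be encoded is left to implementers. As a minimal sketch only, assuming PNG output and an invented record format, the snippet below attaches a provenance record, a unique identifier plus a content hash, as a PNG text chunk:

```python
import hashlib
import json
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance_marker(in_path: str, out_path: str, generator: str) -> str:
    """Attach a synthetic-content record to a PNG and return its marker ID."""
    marker_id = str(uuid.uuid4())          # unique identifier for tracing
    with open(in_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    record = {
        "synthetic": True,                 # declares AI-generated/altered content
        "marker_id": marker_id,
        "generator": generator,            # tool that produced the content
        "source_sha256": digest,           # integrity check for the original file
    }

    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("synthetic_content", json.dumps(record))
    img.save(out_path, pnginfo=meta)       # out_path must be a .png
    return marker_id
```

A plain text chunk like this is easily stripped when a file is re-encoded, which is precisely why the rules demand markers that survive resharing; production systems would more plausibly lean on provenance standards such as C2PA or robust invisible watermarking.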
Users will also be required to declare whether the content they upload has been created or altered using AI tools. Platforms are expected to use technical systems to verify these declarations, strengthening oversight and reducing the scope for misrepresentation.
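The amendments leave the verification mechanism to platforms. Purely as a hypothetical triage policy, assuming the platform runs some detection model that scores uploads, a check might reconcile the user’s declaration with that score:

```python
def check_declaration(declared_synthetic: bool, detector_score: float,
                      threshold: float = 0.8) -> str:
    """Reconcile a user's declaration with a detector's estimate.

    detector_score is assumed to be the probability (0 to 1) that the
    upload is AI-generated, from whatever model the platform operates.
    """
    likely_synthetic = detector_score >= threshold
    if likely_synthetic and not declared_synthetic:
        return "flag_for_review"    # possible undeclared synthetic content
    if declared_synthetic and not likely_synthetic:
        return "label_as_declared"  # trust the declaration, apply the label
    return "consistent"
```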
Deepfakes Recognised as Serious Digital Violations
The amendments make it clear that AI-generated content used for unlawful purposes will be treated on par with other forms of illegal material. This includes deepfakes linked to child sexual abuse material, non-consensual intimate imagery, impersonation, false electronic records, obscene content, and material connected to explosives or other criminal activities.
By explicitly classifying harmful deepfakes within these legal categories, the government has moved to eliminate ambiguity around enforcement. The step reflects growing global concern over how generative AI tools are being misused to manipulate identities, fabricate events, and spread false narratives.
Compliance Pressure on Global Technology Platforms
The new rules are expected to place additional compliance burdens on major global technology companies operating in India, including Meta, Google, and X. India is one of the world’s largest digital markets, and platforms already process a high volume of takedown requests and regulatory notices.
With the introduction of a three-hour deadline and enhanced labelling requirements, companies will need to further strengthen their content moderation systems and local compliance infrastructure. Media and regulatory reports indicate that the volume and urgency of takedown requests are likely to rise, increasing operational pressure on intermediaries.
Concerns Over Due Process and Over-Censorship
While the government has framed the amendments as necessary to protect users and preserve digital trust, digital rights groups have expressed concerns over the potential for over-censorship. Critics argue that extremely short takedown windows may limit platforms’ ability to properly assess the legality of content, particularly in complex or politically sensitive cases.
There are also concerns that reduced timelines could weaken due process and increase the risk of content being removed pre-emptively to avoid liability. The government, however, has maintained that the urgency is essential to prevent irreversible harm caused by rapidly spreading deepfakes and AI-driven misinformation.
Safe Harbour Protection for Platforms That Comply
Despite tightening enforcement, the government has reaffirmed that platforms that comply with the new due-diligence requirements will continue to enjoy safe harbour protection under Section 79 of the IT Act. This means intermediaries will not be held liable for user-generated content as long as they act promptly and in good faith to remove unlawful material and meet regulatory obligations.
Officials have emphasised that the objective is not to stifle innovation, but to ensure that the rapid expansion of AI technologies is matched by strong safeguards, transparency, and accountability.
The MGMM Outlook
India’s latest amendments to the IT Rules mark a strong and necessary intervention to counter the growing threat of AI-driven deepfakes, impersonation, and synthetic misinformation. By bringing “synthetically generated information” directly under the regulatory framework and enforcing a three-hour takedown window, the government has sent a clear signal that rapid viral harm will no longer be tolerated. The shorter timelines reflect the reality that AI-generated content can spread at unprecedented speed, causing reputational damage, financial fraud, and public confusion before traditional moderation processes can respond.
The mandatory labelling and technical identification of AI-generated or altered content further strengthens transparency and accountability across digital platforms. Requiring visible labels, persistent metadata, and user declarations creates a traceable ecosystem that makes it harder for malicious actors to misuse AI tools for deception. While concerns around due process and over-censorship remain part of the broader debate, the framework reinforces the principle that technological innovation must be matched with strong safeguards. The emphasis on compliance-linked safe harbour also ensures that platforms remain protected when they act responsibly, encouraging cooperation while prioritising digital safety and public trust.
(Sources: India Today, Economic Times, Times of India)