- Amendments to the IT Rules, 2021 introduce heightened due diligence obligations for all intermediaries in relation to ‘synthetically generated content’, including mandatory labelling and reporting requirements.
- The Amendments also shorten compliance timelines, including for informing users about the consequences of generating unlawful SGI through platform policies, content removal and responding to grievances.
- A distinction between permitted and prohibited SGI has been created. Prohibited SGI must not be generated using the intermediaries’ computer resources, and permitted SGI must be prominently labelled.
- SSMIs must require users to declare SGI, verify the accuracy of such declarations and ensure proper labelling of such SGI content.
The Ministry of Electronics and Information Technology (“MeitY”) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Amendment Rules”)1 on February 10, 2026, which will come into force on February 20, 2026. The Amendment Rules expand the scope of the existing Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules 2021”) to specifically regulate synthetically generated information (“SGI”) and impose enhanced due diligence and content moderation obligations on intermediaries.
MeitY subsequently published a set of Frequently Asked Questions to address general queries received from stakeholders on the due diligence obligations on intermediaries and associated concerns (“FAQs”)2. As per the FAQs, the Amendment Rules have been introduced in response to rapid advances in artificial intelligence and machine learning technologies, which have made it significantly easier to create and disseminate highly realistic synthetic audio-visual content, including deepfakes, that may mislead or deceive users.
I. Background
In October 2025, MeitY had released draft amendments to the IT Rules 2021 proposing a regulatory framework for SGI and inviting comments from stakeholders as part of the public consultation process (“Draft Rules”)3. We had submitted detailed representations, and it appears that several suggestions we made, alongside other stakeholders, have been considered and reflected in the final notified Amendment Rules.
Our analysis of the proposed amendments can be accessed here and a copy of our submissions to the Government can be accessed here.
II. Scope and Definition of SGI
The Amendment Rules define SGI as “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event”4 (emphasis supplied).
SGI refers to audio, images, photographs, graphics, videos, moving visual recordings, sound recordings or other such content, and would include deepfakes, AI-generated or altered images and videos, voice cloning and other forms of highly realistic audio-visual content.5 Significantly, the FAQs clarify that pure text or written content is not SGI6. Further, the Amendment Rules specifically exclude benign and good-faith uses of technology from the scope of SGI. Particularly:
(i) Routine or good faith editing or technical enhancements, such as formatting, colour correction, noise reduction, transcription or compression, that do not materially alter or misrepresent the substance or context of the underlying content.7 For example, edits such as increasing brightness or contrast on a photo or transcribing an audio/video interview into text.8
(ii) Good-faith creation or preparation of documents, presentations, PDFs, educational or training materials, research outputs and illustrative or template-based content, provided it does not generate a false document or false electronic record.9 Creation of fake certificates, fake official letters, forged IDs or fabricated electronic records will not fall within this exception and will be treated as unlawful SGI.10
(iii) Use of computer resources solely to “improve accessibility, clarity, quality, translation, description, searchability or discoverability” without altering the material part of the underlying audio-visual content.11 For example, adding subtitles or closed captions to videos or audio description for visually impaired users12.
The scope of SGI in the Amendment Rules has been appropriately narrowed from the Draft Rules. Pure text or written content no longer constitutes SGI, thus excluding vast amounts of innocuous automated or AI-assisted written content. Intermediaries will need to develop internal frameworks to determine whether (i) content meets the ‘indistinguishable from reality’ threshold and (ii) an alteration is “material” or merely technical. Further, while clear-cut parodies may not constitute SGI, satire, adaptations, re-enactments and embellished content may present a grey area, as an assessment of whether such content ‘is indistinguishable from reality’ may vary from person to person.
‘Information’ under the IT Rules, 2021 to Include SGI
The Amendment Rules clarify that all existing references to “information” under the IT Rules, 2021, when used to commit an unlawful act, will now also include SGI. Obligations on intermediaries to act on information now explicitly extend to SGI13. Intermediaries must now handle SGI used in unlawful acts the same way they handle any other content hosted on their platform.
III. Due Diligence Obligations for Intermediaries
The Amendment Rules strengthen the overall due diligence framework under the IT Rules, 2021 by imposing enhanced compliance obligations on intermediaries.
A. For All Intermediaries
Intermediaries are now required to notify14 users at least once every three months of the consequences of violating platform terms or applicable law15. The notification should be made in a simple and effective manner through the intermediaries’ rules and regulations, privacy policy or user agreement.16
B. For Intermediaries Enabling or Facilitating the Generation, Publication or Sharing of SGI:
Intermediaries which offer a platform to enable the creation, publication, transmission or dissemination of SGI have further obligations to inform users that creating, publishing or sharing SGI which: (i) contains child sexual exploitative and abuse material (“CSEAM”), non-consensual intimate imagery (“NCII”) or sexually explicit or privacy-invasive content; (ii) contains forged documents or false electronic records; (iii) relates to the preparation or procurement of explosives, arms or ammunition; or (iv) falsely depicts a natural person or real-world event through deceptive misrepresentation of identity, voice, conduct, statements or the event itself (together, “Prohibited SGI”) may attract consequences,17 which include:
(i) The creation of Prohibited SGI attracts penalties and punishment under applicable laws.18
(ii) Contraventions of the IT Rules, 2021 may lead to:
- Removal of the Prohibited SGI19;
- Suspension or termination of the user’s account20;
- Identification of such user and disclosure of their identity to the victim21;
- Mandatory reporting responsibility of the intermediary22 under applicable laws.
These intermediaries are required to take expeditious and appropriate action upon becoming aware of such Prohibited SGI, whether on their own or upon receiving ‘actual knowledge’23.24 Such intermediaries are also required to deploy reasonable and appropriate technical measures, including automated tools, to prevent users from generating or sharing Prohibited SGI.25
Prohibited SGI may be in audio and visual formats such as deepfakes, voice clones, synthetic tutorials, forged identity documents, fabricated official records, impersonation content and synthetic “news” footage presented as authentic. It also includes multi-modal deception, where video and audio are paired with misleading text captions in order to mislead users.26
The Draft Rules applied only to intermediaries that enabled the creation, generation, modification or alteration of SGI. The final Amendment Rules expand this scope to also cover intermediaries that merely enable the publication, transmission, sharing or dissemination of SGI. This broader wording will potentially require even intermediaries such as messaging platforms, listing platforms, and cloud services and cloud storage platforms through which users may transmit or distribute SGI to comply with the requirements under the Amendment Rules.
IV. Labelling of SGI
Where SGI does not constitute Prohibited SGI, intermediaries must ensure that the content is prominently labelled in a manner that is easily noticeable and adequately perceivable. This can be done through visible on-screen disclosures for visual content, or a prominently prefixed audio disclosure for audio content.27 The Draft Rules proposed a fixed requirement that such labels occupy at least 10% of the content. However, the final Amendment Rules have done away with this fixed threshold and instead give intermediaries flexibility in deciding the size and format of the label, provided that it is prominent and enables users to immediately identify the content as SGI.
Additionally, where technically feasible, intermediaries must embed permanent metadata or other appropriate technical markers (such as a unique identifier) into SGI. These markers should make it possible to identify the intermediary’s computer resources used to create or modify the content. Intermediaries are expressly prohibited from enabling the removal, alteration or suppression of these labels or embedded markers on their platforms.
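The Amendment Rules do not prescribe any particular format for these embedded markers. As a purely illustrative sketch, a platform could derive a tamper-evident provenance marker, identifying the generating tool and bound to the content, and embed it in the file’s metadata. The key, field names and function names below are assumptions for the example, not anything mandated by the Rules.

```python
import hashlib
import hmac
import json

# Hypothetical per-platform signing secret; in practice this would live in a
# managed key store, not be hard-coded.
PLATFORM_KEY = b"platform-signing-key"

def make_sgi_marker(content: bytes, tool_id: str) -> dict:
    """Build a marker identifying the computer resource used to generate SGI."""
    payload = {
        "tool_id": tool_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign the payload so removal or alteration of the marker is detectable.
    signature = hmac.new(
        PLATFORM_KEY,
        json.dumps(payload, sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    return {**payload, "signature": signature}

def verify_sgi_marker(content: bytes, marker: dict) -> bool:
    """Check that the marker matches the content and was issued with our key."""
    expected = make_sgi_marker(content, marker["tool_id"])
    return (
        hmac.compare_digest(expected["signature"], marker["signature"])
        and expected["content_sha256"] == marker["content_sha256"]
    )
```

A scheme of this kind would let the intermediary later confirm that a given file was produced using its computer resources, consistent with the Rules’ aim of traceability, though emerging provenance standards (such as C2PA-style content credentials) may ultimately serve this role.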
V. Additional Due Diligence Obligations on Significant Social Media Intermediaries
Significant Social Media Intermediaries (“SSMI”)28 which enable the display or publishing of any information on their platforms must ensure the following before allowing users to display, upload or publish information29:
(i) They must require users to declare whether the information they are uploading is SGI (for example, “AI Generated: Yes/No”).30
(ii) They must verify the accuracy of such user declarations with regard to the “nature, format and source” of the content, using appropriate technical measures or other suitable mechanisms deployed by it, including verification using metadata and signals.31
(iii) Upon confirmation that the content is SGI, SSMIs must clearly and prominently label it indicating that the content is SGI.
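The three-step flow above can be sketched as a minimal moderation pipeline. Since the Rules leave the verification mechanism to the SSMI, the detector below is a stub standing in for whatever metadata- or signal-based checks a platform deploys; the class, field and label names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content_id: str
    declared_sgi: bool          # step (i): the user's "AI Generated: Yes/No" declaration
    label: Optional[str] = None

def detector_flags_sgi(content_id: str) -> bool:
    """Stub for metadata/signal-based SGI detection (hypothetical heuristic)."""
    return content_id.startswith("synthetic-")

def process_upload(upload: Upload) -> Upload:
    # Step (ii): verify the declaration against available technical signals,
    # so an inaccurate "No" declaration can still be caught by the detector.
    verified_sgi = upload.declared_sgi or detector_flags_sgi(upload.content_id)
    # Step (iii): label confirmed SGI prominently before it is published.
    if verified_sgi:
        upload.label = "Synthetically Generated Information"
    return upload
```

Note the design choice the Rules effectively force: the declaration and the detector act as independent signals, and content is labelled if either indicates SGI, which is where the over-labelling risk discussed below originates.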
Platforms are likely to update their terms of use and related policies to expressly incorporate user obligations to accurately declare SGI content. Especially given the requirement to periodically notify users of such policies every three months, platforms may treat users who make inaccurate or false SGI declarations as being in breach of platform terms, potentially resulting in content removal, account suspension or termination.
An SSMI’s obligation is limited to taking reasonable and proportionate technical measures to verify the correctness of user declarations and to ensure that no SGI is published without a label.32 In situations where it is established that an SSMI “knowingly” permitted the uploading of, or failed to act upon, Prohibited SGI or unlabelled SGI, it will be deemed to have failed to exercise due diligence.33
The Amendment Rules do not clearly articulate the threshold for “knowledge” in this context, nor do they clarify whether an inadvertent failure to detect SGI by itself would jeopardize safe harbour protection. Automated detection tools may generate both false positives and false negatives. Manual review at scale is operationally burdensome. In the absence of clearer guidance, intermediaries may adopt a risk-averse approach, including over-labelling content as SGI to mitigate liability exposure, resulting in user desensitization and undermining the regulatory objective of the amendments.
VI. Mandatory Reporting Obligations
Intermediaries must inform users that, if their conduct on the platform amounts to an offence under applicable laws and such offence requires mandatory reporting, the matter may be reported to the appropriate authorities. Platforms which enable the creation of SGI must specifically clarify that violations involving such content may also trigger mandatory reporting, where required by law. The Amendment Rules reference the Bharatiya Nagarik Suraksha Sanhita, 2023 (“BNSS”) and the Protection of Children from Sexual Offences Act, 2012 (“POCSO Act”), which contain mandatory reporting obligations for certain offences.34
On a closer reading, such mandatory reporting obligations have always applied to intermediary platforms and were also recently raised in MeitY’s notice to Grok8. However, the Amendment Rules now explicitly require intermediaries, whether or not they offer tools to generate SGI, to inform their users about such mandatory obligations on the intermediaries. For clarity, the Amendment Rules do not create any obligation on users to report, although such reporting obligations on users separately exist under the same provisions of law.9
VII. Safe Harbour
The Amendment Rules clarify that an intermediary removing or disabling access to any information, including SGI, in compliance with the IT Rules, 2021 will not be treated as a breach of the conditions for safe harbour under Section 79(2)(a) or (b) of the IT Act.35
As a compliance safeguard, SSMIs should consider documenting and disclosing, through their periodic compliance reports, the measures implemented to identify and address SGI. Transparent reporting may assist in demonstrating good-faith efforts and reasonable due diligence, even where certain instances of SGI evade detection.
VIII. Revised Timelines
The Amendment Rules have substantially tightened the timelines for compliance with the provisions of the IT Rules, 2021, including those related to content takedown and user grievances. The revised timelines are as follows:
| Compliance | Old Timeline | Revised Timeline |
| --- | --- | --- |
| Removal of content upon receiving “actual knowledge”.36 | 36 hours from receipt of such “actual knowledge”. | 3 hours from receipt of such “actual knowledge”. |
| Removal of content which appears to contain sexually explicit material, including images which expose private parts of individuals, show full or partial nudity etc., including artificially morphed images, upon receiving a complaint made by the individual.37 | 24 hours from receipt of the complaint made by the individual. | 2 hours from receipt of the complaint made by the individual. |
| Resolving general user grievances.38 | 15 days from receipt of the complaint. | 7 days from receipt of the complaint. |
| Resolving user grievances containing requests for removal of content relating to Rule 3(1)(b) [other than those falling within sub-clauses (i), (iv) and (xi)]39.40 | 72 hours from receipt of the complaint. | 36 hours from receipt of the complaint. |
| Notifying users through the intermediaries’ rules and regulations, privacy policy or user agreement, of the consequences of non-compliance with provisions of the IT Rules, 2021.41 | Once every 1 year. | Once every 3 months. |
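For illustration, the revised timelines above can be encoded as simple deadline computations in an intermediary’s compliance tooling. The durations mirror the table; the trigger names and function are hypothetical, not terms used in the Rules.

```python
from datetime import datetime, timedelta

# Durations reflect the revised-timeline table; key names are illustrative.
REVISED_TIMELINES = {
    "actual_knowledge_takedown": timedelta(hours=3),
    "sexually_explicit_complaint": timedelta(hours=2),
    "general_grievance": timedelta(days=7),
    "rule_3_1_b_removal_grievance": timedelta(hours=36),
    "periodic_user_notification": timedelta(days=90),  # "once every 3 months"
}

def compliance_deadline(trigger: str, received_at: datetime) -> datetime:
    """Return the latest time by which the intermediary must act."""
    return received_at + REVISED_TIMELINES[trigger]
```

For example, “actual knowledge” received at 09:00 would require removal by 12:00 the same day, which illustrates how little room the 3-hour window leaves for manual legal review.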
This tightened compliance regime will necessitate a prompt review of internal compliance processes by intermediaries, including adjustments to notice-and-takedown and content moderation systems. Given that failure to adhere to the prescribed timelines may expose intermediaries to the potential loss of safe harbour protection under the IT Act, platforms may adopt a more cautious approach to content moderation to mitigate regulatory risk; the limited time available to independently assess the validity of a request may lead to broad-based takedowns. Some industry bodies have sought stakeholder consultations with MeitY on the accelerated compliance requirements.
IX. Conclusion
The Amendment Rules are a positive step towards addressing the online harm caused by SGI, while continuing to permit its responsible use in support of innovation, accessibility and economic growth. They align with the Government of India’s broader sector-specific approach to regulating AI through targeted guardrails rather than overarching restrictions. However, the Amendment Rules introduce materially enhanced compliance obligations for intermediaries, which will require them to undertake technical adjustments, update policies, train moderation teams and deploy detection tools appropriately. In this context, the prescribed 10-day implementation timeline may present practical challenges for platforms seeking to operationalize the changes in a structured and legally compliant manner.