Technology Law Analysis


AI-Generated Content and Combating Deepfakes: What India’s New Rules Mean for Global Platforms

February 18, 2026

  • Amendments to the IT Rules, 2021 introduce heightened due diligence obligations for all intermediaries in relation to ‘synthetically generated content’, including mandatory labelling and reporting requirements.

  • The Amendments also shorten compliance timelines, including for informing users through platform policies of the consequences of generating unlawful SGI, for content removal and for responding to grievances.

  • A distinction between permitted and prohibited SGI has been created: prohibited SGI must not be generated using the intermediaries’ computer resources, and permitted SGI must be prominently labelled.

  • SSMIs must require users to declare SGI, verify the accuracy of such declarations and ensure proper labelling of such SGI content.


The Ministry of Electronics and Information Technology (“MeitY”) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Amendment Rules”)1 on February 10, 2026, which will come into force on February 20, 2026. The Amendment Rules expand the scope of the existing Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules 2021”) to specifically regulate synthetically generated information (“SGI”) and impose enhanced due diligence and content moderation obligations on intermediaries.

MeitY subsequently published a set of Frequently Asked Questions to address general queries received from stakeholders on the due diligence obligations on intermediaries and associated concerns (“FAQs”)2. As per the FAQs, the Amendment Rules have been introduced in response to rapid advances in artificial intelligence and machine learning technologies, which have made it significantly easier to create and disseminate highly realistic synthetic audio-visual content, including deepfakes, that may mislead or deceive users.

I. Background

In October 2025, MeitY had released draft amendments to the IT Rules 2021 proposing a regulatory framework for SGI and inviting comments from stakeholders as part of the public consultation process (“Draft Rules”)3. We had submitted detailed representations, and it appears that several suggestions we made, alongside other stakeholders, have been considered and reflected in the final notified Amendment Rules.

Our analysis of the proposed amendments can be accessed here and a copy of our submissions to the Government can be accessed here.

II. Scope and Definition of SGI

The Amendment Rules define SGI as “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event”4 (emphasis supplied).

SGI refers to audio, images, photographs, graphics, videos, moving visual recordings, sound recordings or other such content, and would include deepfakes, AI-generated or altered images and videos, voice cloning and other forms of highly realistic audio-visual content.5 Significantly, the FAQs clarify that pure text or written content is not SGI.6 Further, the Amendment Rules specifically exclude benign and good-faith uses of technology from the scope of SGI. Particularly:

(i) Routine or good faith editing or technical enhancements, such as formatting, colour correction, noise reduction, transcription or compression, that do not materially alter or misrepresent the substance or context of the underlying content.7 For example, edits such as increasing brightness or contrast on a photo or transcribing an audio/video interview into text.8

(ii) Good-faith creation or preparation of documents, presentations, PDFs, educational or training materials, research outputs and illustrative or template-based content, provided it does not generate a false document or false electronic record.9 Creation of fake certificates, fake official letters, forged IDs or fabricated electronic records will not fall within this exception and will be treated as unlawful SGI.10

(iii)  Use of computer resources solely to “improve accessibility, clarity, quality, translation, description, searchability or discoverability” without altering the material part of the underlying audio-visual content.11 For example, adding subtitles or closed captions to videos or audio description for visually impaired users12.  

The scope of SGI in the Amendment Rules has been appropriately narrowed from the Draft Rules. Pure text or written content no longer constitutes SGI, thus excluding vast amounts of innocuous automated or AI-assisted written content. Intermediaries will need to develop internal frameworks to determine whether (i) content meets the ‘indistinguishable from reality’ threshold and (ii) an alteration is “material” or merely technical. Further, while clear-cut parodies may not constitute SGI, satire, adaptations, re-enactments and embellished content may present a grey area, as the assessment of whether such content is ‘indistinguishable from reality’ may vary from person to person.
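These two threshold questions are inherently qualitative, but an intermediary’s first-pass triage can be sketched as a checklist. The sketch below is purely illustrative: the class and field names are our own assumptions, not terms defined in the Amendment Rules, and borderline content would still require human legal review.

```python
from dataclasses import dataclass

# Hypothetical internal triage checklist mapping the Amendment Rules' SGI
# tests onto boolean flags. All field names are illustrative assumptions.
@dataclass
class ContentAssessment:
    is_audio_visual: bool         # pure text is excluded per the FAQs
    algorithmically_altered: bool
    appears_real: bool            # the 'indistinguishable from reality' threshold
    material_alteration: bool     # vs. routine edits like colour correction
    accessibility_only: bool      # subtitles, captions, translation etc.

def is_sgi(c: ContentAssessment) -> bool:
    """Rough first-pass screen; grey-area cases still need human review."""
    if not c.is_audio_visual:
        return False  # FAQ: pure text or written content is not SGI
    if c.accessibility_only or not c.material_alteration:
        return False  # good-faith edits and accessibility uses are excluded
    return c.algorithmically_altered and c.appears_real

# A colour-corrected photo is not SGI; a realistic face-swap video is.
photo = ContentAssessment(True, True, True, False, False)
deepfake = ContentAssessment(True, True, True, True, False)
assert not is_sgi(photo)
assert is_sgi(deepfake)
```

A checklist like this cannot decide grey areas such as satire or re-enactments; its value is in routing clear cases automatically and escalating the rest.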

‘Information’ under the IT Rules, 2021 to Include SGI

The Amendment Rules clarify that all existing references to “information” under the IT Rules, 2021, when used to commit an unlawful act, will now also include SGI. Obligations on intermediaries to act on information now explicitly extend to SGI13. Intermediaries must now handle SGI used in unlawful acts the same way they handle any other content hosted on their platform.

III. Due Diligence Obligations for Intermediaries

The Amendment Rules strengthen the overall due diligence framework under the IT Rules, 2021 by imposing enhanced compliance obligations on intermediaries.

A.  For All Intermediaries

Intermediaries are now required to notify14 users at least once every three months of the consequences of violating platform terms or applicable law15. The notification should be made in a simple and effective manner through the intermediaries’ rules and regulations, privacy policy or user agreement.16

B. For Intermediaries Enabling or Facilitating the Generation, Publication or Sharing of SGI:

Intermediaries which offer a platform to enable the creation, publication, transmission or dissemination of SGI have further obligations to inform users that creating, publishing or sharing SGI which: (i) contains child sexual exploitative and abuse material (“CSEAM”), non-consensual intimate imagery (“NCII”) and sexually explicit or privacy-invasive content; (ii) contains forged documents or false electronic records; (iii) relates to the preparation or procurement of explosives, arms or ammunition; and (iv) falsely depicts a natural person or real-world event through deceptive misrepresentation of identity, voice, conduct, statements or the event itself (together, “Prohibited SGI”) may attract consequences,17 which include:

(i) The creation of Prohibited SGI attracts the penalties and punishments prescribed under applicable laws.18

(ii) Contraventions of the IT Rules may lead to:

  • Removal of Prohibited SGI19;

  • Suspension or termination of user account20;

  • Identification of such user and disclosure of their identity to the victim21;

  • Mandatory reporting responsibility of the intermediary22 under applicable laws.

These intermediaries are required to take expeditious and appropriate action upon becoming aware of such Prohibited SGI, whether on their own or by receiving ‘actual knowledge’.23, 24 Such intermediaries are also required to deploy reasonable and appropriate technical measures, including automated tools, to prevent users from generating or sharing Prohibited SGI.25
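As a purely illustrative sketch of the kind of automated tool contemplated here, a platform might screen generation requests before serving them. Real-world measures would rely on trained classifiers rather than keyword lists; the patterns and function below are hypothetical assumptions, not anything prescribed by the Amendment Rules.

```python
import re

# Naive placeholder patterns for a pre-generation prompt screen.
# A production system would use ML-based classification, not keywords.
PROHIBITED_PATTERNS = [
    r"\bforged?\s+(id|passport|certificate)\b",  # forged documents / records
    r"\bexplosives?\b",                          # preparation of explosives
    r"\bnon[- ]consensual\b",                    # NCII-related requests
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the request may be served, False if it should be blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in PROHIBITED_PATTERNS)

assert screen_prompt("add subtitles to my lecture video")
assert not screen_prompt("make me a forged passport scan")
```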

Prohibited SGI may be in audio and visual formats such as deepfakes, voice clones, synthetic tutorials, forged identity documents, fabricated official records, impersonation content and synthetic “news” footage presented as authentic. It also includes multi-modal deception, where video and audio are paired with misleading text captions to mislead users.26

The Draft Rules applied only to intermediaries that enabled the creation, generation, modification, or alteration of SGI. The final Amendment Rules expand this scope to also cover intermediaries that merely enable the publication, transmission, sharing, or dissemination of SGI. This broader wording will potentially require even intermediaries such as messaging platforms, listing platforms, cloud service and cloud storage platforms through which users may transmit or distribute SGI to comply with the requirements under the Amendment Rules. 

IV. Labelling of SGI

Where SGI does not fall within Prohibited SGI, intermediaries must ensure that content is prominently labelled in a manner that is easily noticeable and adequately perceivable. This can be through visible on-screen disclosures for visual content, or a prominently prefixed audio disclosure for audio content.27 The Draft Rules proposed a fixed requirement that such labels occupy at least 10% of the content. However, the final Amendment Rules have done away with this fixed threshold and instead give intermediaries flexibility in deciding the size and format of the label, provided that it is prominent and enables users to immediately identify the content as SGI.

Additionally, where technically feasible, intermediaries must embed permanent metadata or other appropriate technical markers (such as a unique identifier) into SGI. These markers should make it possible to identify the intermediary’s computer resources used to create or modify the content. Intermediaries are expressly prohibited from enabling the removal, alteration or suppression of these labels or embedded markers on their platforms.
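A minimal sketch of such a “permanent metadata or technical marker”, assuming a JSON record with a unique identifier and a content hash, might look as follows. In practice, platforms would more likely embed markers in the media container itself, for example via a provenance standard such as C2PA; every field name below is an illustrative assumption.

```python
import hashlib
import json
import uuid

# Hedged sketch of an SGI marker of the kind the Amendment Rules contemplate.
# Field names are illustrative assumptions, not prescribed by the Rules.
def make_sgi_marker(media_bytes: bytes, platform: str) -> str:
    marker = {
        "sgi": True,                                  # labels the content as SGI
        "marker_id": str(uuid.uuid4()),               # unique identifier
        "platform": platform,                         # identifies the computer resource used
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(marker, sort_keys=True)

record = json.loads(make_sgi_marker(b"\x89PNG...", "example-genai-platform"))
assert record["sgi"] and record["platform"] == "example-genai-platform"
```

Binding the marker to a content hash also helps detect the prohibited removal or alteration of labels, since a modified file no longer matches its recorded digest.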

V. Additional Due Diligence Obligations on Significant Social Media Intermediaries 

Significant Social Media Intermediaries (“SSMI”)28 which enable the display or publishing of any information on their platforms must ensure the following before allowing users to display, upload or publish information29:

(i) They must require users to declare whether the information they are uploading is SGI (for example, “AI Generated: Yes/No”).30

(ii)  They must verify the accuracy of such user declarations with regards to “nature, format and source” of the content, using appropriate technical measures or other suitable mechanisms deployed by it, including verification using metadata and signals.31

(iii) Upon confirmation that the content is SGI, SSMIs must clearly and prominently label it indicating that the content is SGI.
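The three-step workflow above can be sketched in code. This is a simplified, assumption-laden illustration: the function name, the metadata signal and the label text are all hypothetical, and actual verification would draw on far richer signals than a single metadata flag.

```python
# Illustrative sketch of the SSMI workflow: collect a user declaration,
# verify it against an available signal, and attach a prominent label.
def process_upload(content: dict, user_declared_sgi: bool) -> dict:
    # Step (ii): verify the declaration against signals, e.g. an embedded
    # provenance marker (modelled here as a simple metadata flag).
    detected_sgi = content.get("metadata", {}).get("sgi", False)
    is_sgi = user_declared_sgi or detected_sgi

    # Step (iii): label confirmed SGI clearly and prominently before display.
    content["label"] = "AI-generated content" if is_sgi else None
    # Flag mismatches for review: a false declaration may breach platform terms.
    content["declaration_mismatch"] = detected_sgi and not user_declared_sgi
    return content

upload = process_upload({"metadata": {"sgi": True}}, user_declared_sgi=False)
assert upload["label"] == "AI-generated content"
assert upload["declaration_mismatch"]
```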

Platforms are likely to update their terms of use and related policies to expressly incorporate user obligations to accurately declare SGI content. Especially given the requirement to periodically notify users of such policies every three months, platforms may treat users who make inaccurate or false SGI declarations as being in breach of platform terms, potentially resulting in content removal, account suspension or termination.

An SSMI will be considered to have discharged its due diligence obligations to the extent that it takes reasonable and proportionate technical measures to verify the correctness of user declarations and to ensure that no SGI is published without a label.32 However, where it is established that an SSMI “knowingly” permitted the uploading of, or failed to act upon, Prohibited SGI or unlabelled SGI, it will be deemed to have failed to exercise due diligence.33

The Amendment Rules do not clearly articulate the threshold for “knowledge” in this context, nor do they clarify whether an inadvertent failure to detect SGI by itself would jeopardize safe harbour protection. Automated detection tools may generate both false positives and false negatives. Manual review at scale is operationally burdensome. In the absence of clearer guidance, intermediaries may adopt a risk-averse approach, including over-labelling content as SGI to mitigate liability exposure, resulting in user desensitization and undermining the regulatory objective of the amendments.

VI. Mandatory Reporting Obligations

Intermediaries must inform users that, if their conduct on the platform amounts to an offence under applicable laws and such offences require mandatory reporting, the matter may be reported to the appropriate authorities. Platforms which enable the creation of SGI must specifically clarify that violations involving such content may also trigger mandatory reporting, where required by law. The Amendment Rules reference the Bharatiya Nagarik Suraksha Sanhita, 2023 (“BNSS”) and the Protection of Children from Sexual Offences Act, 2012 (“POCSO Act”), which contain mandatory reporting requirements for certain reportable offences.34

On a closer reading, such mandatory reporting obligations have always applied to intermediary platforms and were also recently raised in MeitY’s notice to Grok. However, the Amendment Rules now explicitly require intermediaries, whether or not they offer tools to generate SGI, to inform their users about such mandatory obligations on the intermediaries. For clarity, the Amendment Rules do not create any reporting obligation on users, although such obligations separately exist under the same provisions of law.

VII. Safe Harbour

The Amendment Rules clarify that an intermediary removing or disabling access to any information, including SGI, in compliance with the IT Rules, 2021 will not be treated as a breach of the conditions for safe harbour under Section 79(2)(a) or (b) of the IT Act.35

As a compliance safeguard, SSMIs should consider documenting and disclosing, through their periodic compliance reports, the measures implemented to identify and address SGI. Transparent reporting may assist in demonstrating good-faith efforts and reasonable due diligence, even where certain instances of SGI evade detection.

VIII. Revised Timelines

The Amendment Rules have substantially tightened the timelines for compliance with the provisions of the IT Rules, 2021, including those related to content takedown and user grievances. The revised timelines are as follows:

  • Removal of content upon receiving “actual knowledge”:36 reduced from 36 hours to 3 hours from receipt of such “actual knowledge”.

  • Removal of content which appears to contain sexually explicit material, including images which expose private parts of individuals or show full or partial nudity, including artificially morphed images, upon a complaint made by the individual:37 reduced from 24 hours to 2 hours from receipt of the complaint.

  • Resolving general user grievances:38 reduced from 15 days to 7 days from receipt of the complaint.

  • Resolving user grievances containing requests for removal of content relating to Rule 3(1)(b) [other than those falling within sub-clauses (i), (iv) and (xi)]:39, 40 reduced from 72 hours to 36 hours from receipt of the complaint.

  • Notifying users, through the intermediaries’ rules and regulations, privacy policy or user agreement, of the consequences of non-compliance with the IT Rules, 2021:41 frequency increased from once every year to once every 3 months.

 

This tightening will necessitate a prompt review of internal compliance processes by intermediaries, including adjustments to notice-and-takedown and content moderation systems. Given that failure to adhere to the prescribed timelines may expose intermediaries to the potential loss of safe harbour protection under the IT Act, platforms may adopt a more cautious approach to content moderation to mitigate regulatory risk: with limited time to independently assess the validity of a request, broad-based takedowns become more likely. Some industry bodies have sought stakeholder consultations with MeitY on the accelerated compliance requirements.
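For internal tooling, the revised timelines summarized above can be encoded in a simple deadline helper. The category keys below are internal labels of our own devising, not statutory terms, and real systems would also need to handle clock-stopping events and escalation.

```python
from datetime import datetime, timedelta

# Revised compliance deadlines under the Amendment Rules (see the list above).
# Category names are internal assumptions, not terms used in the Rules.
REVISED_DEADLINES = {
    "actual_knowledge_takedown": timedelta(hours=3),    # was 36 hours
    "sexual_content_complaint": timedelta(hours=2),     # was 24 hours
    "general_grievance": timedelta(days=7),             # was 15 days
    "rule_3_1_b_removal_request": timedelta(hours=36),  # was 72 hours
}

def due_by(category: str, received_at: datetime) -> datetime:
    """Return the compliance deadline for a complaint received at `received_at`."""
    return received_at + REVISED_DEADLINES[category]

received = datetime(2026, 2, 20, 9, 0)
assert due_by("actual_knowledge_takedown", received) == datetime(2026, 2, 20, 12, 0)
```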

IX. Conclusion

The Amendment Rules are a good step towards addressing the online harm caused by SGI, while continuing to permit its responsible use in support of innovation, accessibility and economic growth. They align with the Government of India’s broader sector-specific approach to regulating AI through targeted guardrails rather than overarching restrictions. However, the Amendment Rules introduce materially enhanced compliance obligations for intermediaries, who will need to undertake technical adjustments, update policies, train moderation teams and deploy detection tools appropriately. In this context, the prescribed 10-day implementation timeline may present practical challenges for platforms seeking to operationalize the changes in a structured and legally compliant manner.

 


Tanishq Gupta, Prerana Reddy and Aaron Kamath
You can direct your queries or comments to the authors.



1See: https://www.meity.gov.in/static/uploads/2026/02/f55fe52418b03f58b0669f6a8bc03b6d.pdf (last accessed on February 18, 2026).

2See: https://www.meity.gov.in/static/uploads/2025/10/065b6deb585441b5ccdf8be42502a49c.pdf (last accessed on February 18, 2026).

3See: https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf (last accessed on February 18, 2026).

4Rule 2(wa), IT Rules, 2021.

5FAQ No. 5.

6FAQ No. 8.

7Rule 2(wa)(a), IT Rules, 2021.

8FAQ No. 6. Other examples include: compressing a video for faster upload, removing background noise in an audio recording, stabilising a shaky video or correcting colour balance.

9Rule 2(wa)(b), IT Rules, 2021.

10FAQ No. 6.

11Rule 2(wa)(c), IT Rules, 2021.

12FAQ No. 6(c).

13Rule 3(1)(b) (taking reasonable efforts to prevent users from posting harmful content), Rule 3(1)(d) (removing content upon government notification or court order), and Rules 4(2) and 4(4) (additional due diligence obligations for SSMIs, including tracing and monitoring certain unlawful content)

14Notifications are to be issued in English or any other language in the Eighth Schedule of the Indian Constitution, which recognizes twenty-two official languages: Assamese, Bengali, Bodo, Dogri, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Maithili, Malayalam, Manipuri, Marathi, Nepali, Odia, Punjabi, Sanskrit, Santhali, Sindhi, Tamil, Telugu and Urdu.

15Discussed below in -

16Rule 3(1)(c), IT Rules, 2021.

17Rule 3(1)(ca), IT Rules, 2021.

18Such as the Bharatiya Nyaya Sanhita, 2023 (45 of 2023), the Protection of Children from Sexual Offences Act, 2012 (32 of 2012), the Representation of the People Act, 1951 (43 of 1951), the Indecent Representation of Women (Prohibition) Act, 1986 (60 of 1986), the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013 (14 of 2013), and the Immoral Traffic (Prevention) Act, 1956 (104 of 1956).

19Rule 3(1)(ca)(ii)(I), IT Rules, 2021.

20Rule 3(1)(ca)(ii)(II), IT Rules, 2021.

21Rule 3(1)(ca)(ii)(III), IT Rules, 2021.

22Rule 3(1)(ca)(ii)(IV), IT Rules, 2021.

23‘Actual knowledge’ of a violation is when it is received by an intermediary in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the IT Act.

24Rule 3(1)(cb), IT Rules, 2021.

25Rule 3(3)(i), IT Rules 2021.

26FAQ 21.

27Rule 3(3)(ii), IT Rules 2021.

28‘Significant social media intermediary’ means a social media intermediary having a number of registered users in India above such threshold as notified by the Central Government; at present, fifty lakh registered users in India has been notified as the threshold for a social media intermediary to be considered a significant social media intermediary.

29Rule 4(1A), IT Rules, 2021.

30FAQ No. 24.

31Id.

32Explanation to Rule 4(1A), IT Rules, 2021.

33Proviso to Rule 4(1A), IT Rules, 2021.

34Under the POCSO Act, any person who has knowledge, or even an apprehension, that an offence under the POCSO Act may be or has been committed must report it to the Special Juvenile Police Unit or the local police in the form and manner prescribed. Failure to report under this provision is punishable with both fine and imprisonment.

Under the BNSS, every person who is aware of the commission of, or the intention of any other person to commit, any offence of a grievous nature, including offences against the State, public order (including rioting and unlawful assembly), kidnapping and abduction, and murder and culpable homicide, has an obligation to report to the nearest Magistrate or police officer.

35Rule 2(1B), IT Rules, 2021.

36Rule 3(1)(d), IT Rules, 2021.

37Rule 3(2)(b), IT Rules, 2021.

38Rule 3(2)(a)(i), IT Rules, 2021.

39This broadly covers the following types of information: (i) obscene, pornographic or paedophilic content; (ii) content invasive of privacy, including bodily privacy, (iii) gender-insulting or harassing content; racially or ethnically objectionable material; (iv) content promoting money laundering, gambling, harmful online games, or inciting religious/caste-based enmity and violence; (v) content harmful to children; (vi) misleading or deceptive content, including false information and government-identified fake news; (vii) impersonation of another person; (viii) content threatening India’s sovereignty, security, public order, or inciting cognisable offences, (ix) malware or malicious code, and (x) unverified online games and advertisements/promotions of impermissible online games.

40Rule 3(2)(a)(i), IT Rules, 2021.

41Rule 3(1)(c), IT Rules, 2021.

Nishith Desai Associates ©2026 All rights reserved.