
Indian Data Laws & Meta Sharing — For Dummies (2026 Edition)
The Complete 2026 Updated Edition


A jargon-free, interactive e-book on the IT Intermediary Rules 2021→2026, the DPDP Act 2023, deepfake laws, data sharing with the government, and what it all means for YOUR industry.

Topics: IT Rules 2021 · 2026 Amendments · DPDP Act 2023 · Instagram / Meta · Deepfakes & AI · Safe Harbor

DID YOU KNOW?

India has over 900 million internet users — more than Europe's entire population. In a single year, Instagram (Meta) restricted over 28,000 pieces of content in India based on legal requests. They share IP addresses, login details, device info, and even message records when served a valid legal order. Every single one of those 900M users is governed by the rules in this e-book!


The Big Picture: Why Should You Care?

Imagine you're posting a funny meme on Instagram. Or running an e-commerce store. Or building a SaaS app. Every single one of these activities falls under India's digital laws — and those laws just got a major upgrade in February 2026.

India's government regulates what happens on the internet — who can post what, how fast harmful content gets removed, what happens with your personal data, and now, how AI-generated "deepfake" content is handled.

  • 3 hrs — new takedown deadline (was 36 hours)
  • 7 days — complaint resolution (was 15 days)
  • 2 hrs — intimate image removal (was 24 hours)
  • 180 days — removed content preserved for investigations

The Three Laws You Need to Know

Think of India's digital law framework as a three-legged stool:

IT Intermediary Rules 2021

The "rulebook for platforms." Tells Instagram, YouTube, WhatsApp what they MUST do. Amended in 2022, 2023, 2025, and 2026. 📄 G.S.R. 139(E) → G.S.R. 120(E)

DPDP Act 2023

India's big privacy law. Tells everyone — platforms, companies, even the government — how they can collect, store, and use your personal data. Rules finalized in 2025.

IT Act 2000

The granddaddy. The parent law. Section 79 creates "safe harbor" — platforms aren't liable for user content IF they follow the rules. Everything else is built on this.

What's an "Intermediary" Anyway?

An intermediary is any platform that sits between you and other users. It doesn't create content — it just hosts or transmits it. Instagram, WhatsApp, YouTube, Flipkart, Paytm, your web host, even Jio/Airtel — all intermediaries.

A "Significant Social Media Intermediary" (SSMI) is a big platform with 50 lakh+ (5 million+) registered users in India. Instagram, Facebook, YouTube, WhatsApp, X, Telegram are all SSMIs. They have extra obligations on top of the basic ones.
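The SSMI threshold is a simple numeric test, which makes it easy to encode in compliance tooling. A minimal sketch in Python (the 50 lakh figure comes from the rules; the function name and structure are purely illustrative):

```python
SSMI_THRESHOLD = 5_000_000  # 50 lakh registered users in India, per the notified threshold

def is_ssmi(registered_users_in_india: int) -> bool:
    """A social media intermediary at or above the notified user
    threshold is a Significant Social Media Intermediary (SSMI)
    and picks up the extra Rule 4 obligations."""
    return registered_users_in_india >= SSMI_THRESHOLD

print(is_ssmi(4_999_999))  # False -> basic Rule 3 obligations only
print(is_ssmi(5_000_000))  # True  -> extra SSMI obligations apply
```

Below the threshold, only the baseline due-diligence duties apply; crossing it triggers the additional compliance-officer, traceability, and reporting obligations described later.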

DID YOU KNOW?

Even online gaming platforms like Dream11 or MPL are now classified as "Online Gaming Intermediaries" with their own special rules under the April 2023 amendments! 📄 2023 PDF, Pg 3

REAL-LIFE STORY

In 2024, a deepfake video of a prominent Indian actress went viral on Instagram and X. Under the old rules, platforms had 36 hours to take it down after a government notice. By then, the video had been shared millions of times. Under the new 2026 rules, the takedown window is just 3 hours. That's the kind of change we're talking about.


🤔 How Do These Laws Affect YOU?

Select your profile below to see what changed for you in the 2026 update.

📱 For the Everyday Scroller

You just want to watch reels and chat with friends. Here's how the law protects (and watches) you.

What Changed in 2026?

  • Quarterly Reminders: Policy updates now come every 3 months, not once a year. They include criminal liability warnings under the new Bharatiya Nyaya Sanhita & POCSO Act. 📄 2026 PDF, Pg 7
  • 7-Day Complaint Resolution: Platforms must resolve your grievance in 7 days (was 15 days). Content removal requests: 36 hours (was 72). 📄 2026 PDF, Pg 11
  • 2-Hour Intimate Image Removal: If someone posts your intimate/morphed images, the platform must act within 2 hours (was 24 hours).
  • AI Labels: You'll start seeing labels on AI-generated content. When a Reel is AI-made, it should say so. This helps you spot misinformation.

Your Superpowers

  • DPDP Right to Erase: Demand Meta delete your personal data when you withdraw consent.
  • Consent is King: Data can only be collected for specified purposes — no hidden tracking!
  • Appeal Rights: If a platform removes your content, you get prior notice, a chance to dispute, and access to the Grievance Appellate Committee if unresolved. 📄 2026 PDF, Pg 13 & 17
  • Voluntary Verification: Verify your account via mobile number → visible trust badge. Your verification data stays private.
Your Action Items: (1) If a platform doesn't resolve your complaint in 7 days, escalate to grievanceappeal.gov.in. (2) Report deepfakes immediately. (3) Read the quarterly TOS updates. (4) Be careful with AI tools — creating realistic fakes of real people = criminal prosecution.

📅 How We Got Here: A Timeline

These rules didn't appear overnight. Here's the full evolution:

2000

IT Act 2000 Enacted

India's first comprehensive cyber law. Section 79 created "safe harbor" — platforms aren't liable for user content if they follow rules.

2011

First Intermediary Guidelines

Basic rules: take down content on order, have a grievance officer. Pretty barebones.

25 Feb 2021

IT Intermediary Rules 2021 — The Big Overhaul

Complete rewrite. Due diligence obligations, 36-hour takedowns, traceability for messaging apps, India-resident compliance officers, monthly reports, OTT content classification. 📄 G.S.R. 139(E)

28 Oct 2022

Amendment #1 — Grievance Appellate Committees

Created the GAC — government body for appeals when platform grievance officers fail. 📄 G.S.R. 794(E)

6 Apr 2023

Amendment #2 — Online Gaming + Fact-Checking

Full gaming framework (self-regulatory bodies, real-money game verification, KYC). Added controversial government fact-check unit clause. 📄 G.S.R. 275(E)

Aug 2023

DPDP Act 2023 Enacted

India's data protection law. Consent, data processing, breach notifications, children's data, government security exemptions. Rules finalized 2025.

22 Oct 2025

Amendment #3 — Restructured Takedown Process

Clause (d) of Rule 3(1) completely rewritten. Structured "actual knowledge" framework with authorised officers and periodic review. 📄 G.S.R. 775(E)

10 Feb 2026 ★ THE BIG ONE

Amendment #4 — Deepfake & AI Rules

Defines "synthetically generated information," mandates 3-hour takedowns, requires AI content labelling + metadata, quarterly user warnings, platform-level AI detection tools, safe harbor clarification for proactive moderation. 📄 G.S.R. 120(E)

DID YOU KNOW?

The 2026 rules replaced "Indian Penal Code" with "Bharatiya Nyaya Sanhita, 2023" — India's completely recodified criminal law that replaced the 163-year-old IPC! 📄 2026 PDF, Pg 22


🔥 The 2026 Amendments: What Actually Changed?

On 10 February 2026, the government dropped G.S.R. 120(E) — the most significant update since the rules were first created. Let's break down every major change in plain language.

1. New Definition: "Synthetically Generated Information"

IN PLAIN ENGLISH:

Any audio, image, or video created or modified by AI/algorithms that looks real enough to be mistaken for an actual person or real event. Think deepfakes, AI-generated faces, voice clones, manipulated video that makes someone appear to say something they never said. 📄 Clause (wa), 2026 PDF Pg 4

What's EXCLUDED (you're safe):

✅ Routine editing — cropping, color correction, noise reduction, transcription, compression
✅ Document creation — presentations, PDFs, educational materials, templates, research outputs
✅ Accessibility tools — translation, description, searchability improvements
Example: Using AI to create a video of a politician saying inflammatory things they never said → CAUGHT. Using Canva's AI to enhance product photos or ChatGPT to write a presentation → NOT CAUGHT.

2. The 3-Hour Takedown Bombshell

Rule 3(1)(d): "remove or disable access to such information within 3 HOURS of the receipt of such actual knowledge."

This applies to ALL unlawful content, not just AI fakes. The clock starts when the platform gets a valid court order or authorised government notice. 📄 2026 PDF, Pg 9

"Actual knowledge" can ONLY come from:

  • (i) A court order, OR
  • (ii) A written intimation from an officer ≥ Joint Secretary rank (or ≥ DIG for police), clearly stating the legal basis, specific law violated, and exact URL of the content

Safety valve: All government intimations must be reviewed monthly by a Secretary-level officer for necessity and proportionality. 📄 2026 PDF, Pg 9
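Because the 3-hour clock runs from receipt of "actual knowledge," compliance systems typically compute a hard deadline the moment a valid order lands. A minimal sketch (assuming timezone-aware UTC timestamps; the function is illustrative, not from any official SDK):

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)  # amended Rule 3(1)(d); was 36 hours

def takedown_deadline(received_at: datetime) -> datetime:
    """Deadline runs from receipt of 'actual knowledge' — a court order
    or an authorised officer's written intimation."""
    return received_at + TAKEDOWN_WINDOW

# A notice received at 14:00 UTC must be actioned by 17:00 UTC.
notice = datetime(2026, 3, 1, 14, 0, tzinfo=timezone.utc)
print(takedown_deadline(notice).isoformat())  # 2026-03-01T17:00:00+00:00
```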

3. Quarterly Warnings (Not Yearly)

Old rule: warn users once a year. New rule: every 3 months, and with much more detail:

  • Right to terminate/suspend accounts for non-compliance
  • Criminal liability under Bharatiya Nyaya Sanhita (new criminal code) and POCSO Act
  • Mandatory reporting obligations for certain offences
  • For platforms that enable AI content creation: specific warnings about penalties for illegal synthetic content, referencing BNS, POCSO, Representation of People Act, Indecent Representation of Women Act, Sexual Harassment at Workplace Act, Immoral Traffic Prevention Act 📄 2026 PDF, Pg 7

4. Mandatory AI Content Labelling

New Rule 3(3) requires platforms whose tools can create synthetic content to:

🚫 BLOCK These (Cannot Be Created):

  • CSAM (child sexual abuse material)
  • Non-consensual intimate imagery
  • False documents / electronic records
  • Weapons / explosives instructions
  • Realistic depictions falsely portraying real people or events 📄 Pg 12

🏷️ LABEL Everything Else:

  • Prominent visible label in visual display
  • Audio: prominently prefixed audio disclosure
  • Permanent metadata with unique identifier
  • Platform CANNOT let users strip labels 📄 Pg 12
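To make the "permanent metadata + unique identifier" requirement concrete, the sketch below builds a hypothetical provenance record for a piece of AI-generated content. All field names are assumptions for illustration — the rules mandate the outcomes (visible label, permanent metadata, unique identifier, non-strippable), not any specific schema:

```python
import json
import uuid
from datetime import datetime, timezone

def label_synthetic(content_id: str, tool_name: str) -> dict:
    """Build an illustrative metadata record meant to travel with the
    generated file. Field names are hypothetical, not prescribed."""
    return {
        "content_id": content_id,
        "synthetic": True,                       # declared as AI-generated
        "label": "AI-generated content",         # text for the visible label
        "unique_identifier": str(uuid.uuid4()),  # permanent unique ID
        "generated_by": tool_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_synthetic("reel-123", "example-genai-tool")
print(json.dumps(record, indent=2))
```

In a real pipeline this record would be embedded in the media container itself (and, for audio, paired with the prefixed audible disclosure), with the platform refusing any user request to strip it.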

5. Safe Harbor Clarification

New Rule 2(1B): If a platform removes synthetic/AI content in compliance with these rules — whether proactively via automated tools OR reactively on complaints — that removal does NOT violate safe harbor conditions under Section 79 of the IT Act.

This removes ambiguity: platforms were afraid proactive moderation = "editorial control" = losing safe harbor. Now: complying with rules = protected. 📄 2026 PDF, Pg 5

DID YOU KNOW?

The word "endeavour" was changed to "deploy" in Rule 4(4). In legal language, "endeavour" = "try your best" (soft obligation). "Deploy" = "you MUST do it" (hard obligation). One word change, massive legal impact.


⚖️ Before vs After: The Full Table

The clearest way to see what changed. Left = 2023 version. Right = 2026 amendment.

| Feature | 2023 Version (The Old) | 2026 Version (The New) |
|---|---|---|
| Deepfakes / AI content | No definition. "Synthetically generated information" not mentioned anywhere. | New clause (wa): full definition with 3 carve-outs (routine editing, documents, accessibility); new clause (ca) for all audio-visual info. Pg 1 & 4 |
| Takedown deadline | 36 hours from court order / govt notice. Pg 6 | 3 hours from "actual knowledge" (court order or structured written intimation). Pg 9 |
| User warnings | At least once a year; generic warning. | Every 3 months; detailed criminal liability references (BNS, BNSS, POCSO). Pg 7 |
| AI content labelling | No requirement existed. | New Rule 3(3): visible labels + permanent metadata + unique identifiers; labels can't be stripped. Pg 12 |
| AI content blocking | No specific AI blocking requirement. | Rule 3(3)(a)(i): platforms must deploy technology to prevent creation of illegal synthetics (CSAM, false documents, weapons, impersonation). Pg 12 |
| Complaint resolution | General: 15 days. Content removal: 72 hours. Intimate images: 24 hours. | General: 7 days. Content removal: 36 hours. Intimate images: 2 hours. Pg 11 |
| SSMI AI detection | "Endeavour to deploy technology-based measures" (soft obligation). Pg 12 | "Deploy appropriate technical measures" (hard obligation), plus new Rule 4(1A): declaration + verification + labelling. Pg 14-16 |
| Safe harbor & AI | No clarity on proactive AI moderation. | New Rule 2(1B): explicitly protects platforms removing synthetic content via automated tools. Pg 5 |
| Criminal law reference | Indian Penal Code (IPC). | Bharatiya Nyaya Sanhita 2023 (BNS), Bharatiya Nagarik Suraksha Sanhita 2023 (BNSS). Pg 22 |
| Authorised officers | "The authorised officer shall not be below the rank of DIG." | "There may be one or more authorised officers, each not below DIG" — allows multiple officers for faster processing. Pg 9 |

🔓 What Instagram Actually Shares With the Government

This is the part everyone worries about. Let's be real about what platforms are legally required to share, and when.

ON A LEGAL ORDER, PLATFORMS CAN SHARE:

Account Details: Name, email, phone number, profile info, registration date

Device Info: IP addresses, device IDs, browser fingerprints, login timestamps

Content: Posts, stories, reels, DMs (if specified), comments

Activity Logs: Login history, location data, search history on the platform

Financial Data: Payment info if relevant (e.g., fraud investigation)

Message Tracing: Under Rule 4(2), messaging platforms can be compelled to reveal the first originator of a message (not content) for serious offences

Under Rule 3(1)(j), platforms must respond to law enforcement within 72 hours (24 hours for real-money gaming). The order must clearly state the purpose. 📄 2026 PDF, Pg 10

  • 28K+ — content restrictions in India (2025)
  • 72 hrs — maximum response time to law enforcement
  • 180 days — removed content preserved for probes
  • 24×7 — nodal officer available for law enforcement

REAL EXAMPLE

In 2024, a cybercrime investigation into a crypto scam ring led police to request login IP addresses and device information from Instagram for 47 accounts. Under Rule 3(1)(j), Instagram was legally required to hand this over within 72 hours. The data helped identify operators across 3 states.

KEY FACT: The 180-Day Rule

If you delete an unlawful post — or if Instagram deletes your account — the IT Rules mandate that Instagram must retain a copy of that data and all associated records for 180 days for investigation purposes. Even after you think it's gone, it's not. 📄 2026 PDF, Pg 10, Rule 3(1)(g)
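Retention tooling for Rule 3(1)(g) reduces to date arithmetic: the 180-day preservation window runs from the date of removal. A minimal sketch (illustrative, assuming removal dates are plain calendar dates):

```python
from datetime import date, timedelta

RETENTION_PERIOD = timedelta(days=180)  # Rule 3(1)(g) preservation window

def retention_expiry(removed_on: date) -> date:
    """Removed content and associated records must be preserved for
    180 days from removal for investigative purposes."""
    return removed_on + RETENTION_PERIOD

# Content removed on 10 Feb 2026 must be retained until 9 Aug 2026.
print(retention_expiry(date(2026, 2, 10)))  # 2026-08-09
```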

Important Distinction: The data-sharing obligations (Rules 3 & 4) and Meta's skills MoUs are completely different things. Data sharing = legal mandate (no choice). Skills MoUs = voluntary partnerships. Don't confuse the two!

🛡️ The DPDP Act 2023: Privacy vs Law Enforcement

The Digital Personal Data Protection Act is India's version of GDPR. It empowers you, but it gives the government a master key.

✅ The Good Stuff (For You)

  • Clear Consent: No pre-checked boxes. You explicitly say "Yes, take my data."
  • Purpose Limitation: Data can only be used for the specific purpose you consented to. Email for orders ≠ email for marketing.
  • Breach Notifications: If Instagram is hacked, they MUST tell you and the Data Protection Board within 72 hours.
  • Data Deletion: Request Meta to erase your entire history when you delete your account.
  • Children's Data: Extra protections for under-18s. Verifiable parental consent required. No behavioural tracking of children.

⚠️ The Catch (Govt Access)

The Act has a massive exception. The central government can exempt its agencies from DPDP rules for:

  • 🚨 Sovereignty and integrity of India
  • 🚨 Security of the State
  • 🚨 Friendly relations with foreign States
  • 🚨 Maintenance of public order
  • 🚨 Preventing incitement to offences

Translation: If the police suspect you of a crime, Meta is legally bound to hand over your DMs, location history, and linked devices. Consent goes out the window.

How DPDP + IT Rules Work Together

The DPDP Act sets the data protection floor (baseline privacy rights). The IT Intermediary Rules set the platform behaviour ceiling (what platforms must do). They apply simultaneously — when police request data under Rule 3(1)(j), the platform must comply with both.

DID YOU KNOW?

The DPDP Act introduces "Data Fiduciary" (entity that decides why/how your data is processed — Instagram is one) and "Significant Data Fiduciary" (like Meta — must appoint Data Protection Officer in India, conduct audits, impact assessments). Sounds familiar? It mirrors the IT Rules' SSMI requirements!

HOW IT PLAYS OUT

Say the Income Tax department suspects you're hiding income based on your Instagram lifestyle posts (luxury cars, foreign trips). Under the IT Rules, they request your account data. Under the DPDP Act, they also access your personal data from payment platforms, e-commerce sites, and banks — all under the "law enforcement" exception. The two laws work together.


🏭 What Changed For YOUR Industry?

Select your industry below for a tailored deep-dive:

📰 News Publishers & Digital Media

Dual regulation: You're governed by BOTH Part II (intermediary rules — if you host comments/user content) AND Part III (Code of Ethics for publishers). The 2026 amendments hit from both sides.

Deepfake news detection: If you publish or host user-generated news content, you need tools to detect AI-manipulated images and videos.

Three-tier self-regulation: Level I (your Grievance Officer) → Level II (industry self-regulatory body headed by a retired judge) → Level III (government oversight via Inter-Departmental Committee). Unchanged in 2026 but enforced more strictly.

Emergency blocking: Rule 16 allows the Secretary, MIB, to block content without a prior hearing in emergencies, subject to post-facto committee review within 48 hours.

⚠️ WATCH OUT: The government fact-check unit (2023 amendment) can flag your content about Central Government business as "fake or false." Courts have questioned this, but it remains in the rules.

🎮 Online Gaming Platforms

Self-regulatory body: Must be verified by a government-designated Online Gaming Self-Regulatory Body (OGSRB). The OGSRB verifies your game doesn't involve "wagering." 📄 2023 PDF, Pg 14

KYC mandatory: Before accepting deposits, verify user identity via RBI-compliant KYC. No anonymous real-money gaming.

No platform financing: Cannot offer credit or enable third-party loans for gameplay. No "play now, pay later."

24-hour data response: Gaming intermediaries with real-money games must respond to law enforcement within 24 hours (not 72). 📄 2026 PDF, Pg 10

Child protection: Age-gating, parental controls, addiction warnings, self-exclusion limits for time and money.

DID YOU KNOW?

If your game isn't verified by an OGSRB, intermediaries must block advertisements and access to it — effectively making it invisible on major platforms!

🎬 OTT / Streaming Platforms

Content classification: Rate all content: U, U/A 7+, U/A 13+, U/A 16+, or A (Adult). Schedule provides guidelines based on theme, violence, nudity, sex, language, substance abuse, horror. 📄 2026 PDF, Pg 32-33

Access controls: U/A 13+ needs parental lock options. "A" rated content needs reliable age verification.

AI content: If using AI for thumbnails, trailers, or promo content — labelling rules apply. If you host user comments/reviews — intermediary rules apply too.

Accessibility: Reasonable efforts for persons with disabilities — closed captioning, subtitles, audio descriptions.

Grievance handling: Same three-tier structure as news media.

🤖 AI & Machine Learning Companies

Ground zero for these rules. If your product creates, modifies, or enables creation of "synthetically generated information" — you have the most new obligations.

Blocking (Rule 3(3)(a)(i)): Your AI must be designed so it CANNOT generate: CSAM, non-consensual intimate imagery, false documents, weapons content, or realistic impersonation of real people/events.

Labelling + metadata (Rule 3(3)(a)(ii)): Everything else must be: visually labelled, embedded with permanent metadata, tagged with unique identifier, and NOT strippable by users.

User warnings (Rule 3(1)(ca)): Must warn users that creating illegal synthetics can result in criminal prosecution under BNS, POCSO, Representation of People Act, Indecent Representation of Women Act, Sexual Harassment at Workplace Act, Immoral Traffic Prevention Act. 📄 2026 PDF, Pg 7

⚠️ EXISTENTIAL RISK: If your platform knowingly allows illegal synthetic content, or fails to act after becoming aware → loss of safe harbor → your company is directly liable for ALL user-generated content.

🛒 E-Commerce Platforms

AI fake reviews: If your marketplace allows user-generated product images/reviews, AI-generated fakes now fall under "synthetically generated information." You may need detection tools.

Faster grievances: Customer complaints: 7 days (was 15). Content takedowns: 36 hours (was 72).

Law enforcement data: Rule 3(1)(j) applies fully — buyer/seller data within 72 hours for fraud/tax investigations.

DPDP compliance: Consent mechanisms, data retention policies, breach notifications. Large platforms are likely "Significant Data Fiduciaries" with audit requirements.

💡 TIP: If running a marketplace on Shopify/WooCommerce, update TOS to prohibit AI-generated fake product images and reviews. This protects your safe harbor.

📣 Digital Marketers

AI ad creatives: Must be labelled — even if just enhanced stock photos.

Influencer partnerships: If influencer content uses AI — need BOTH advertising disclosure AND AI content disclosure.

Gaming ads: Cannot advertise unverified online real-money games — platforms must block such ads.

DPDP consent: Marketing data collection must comply with DPDP Act consent requirements. No pre-checked boxes.

Deepfake campaigns: Using deepfake-style content even for products could trigger criminal provisions if it misrepresents real people/events.


🤝 Meta's Peace Offering: The Skill MoUs

To maintain a good relationship with India (its largest user base), Meta doesn't just comply with strict laws — they actively invest in the country's tech future.

DGT Digital Skilling

Oct 2024 · Directorate General of Training

Training programs for digital literacy and social media marketing skills through the national training infrastructure.

AICTE "YuvAI"

Oct 2024 · All India Council for Technical Education

Training 100,000 students aged 18-30 to build applications using open-source LLMs. Bridging the AI talent gap.

MSDE AI Education

Aug 2024 · Ministry of Skill Development

AI Assistant on Skill India Digital Portal using Meta's Llama model. 5 Centers of Excellence for VR/Mixed Reality training.

DID YOU KNOW?

Meta also established the Center for Generative AI, Srijan at IIT Jodhpur with seed funding to advance research in ethical AI for healthcare, education, and mobility. This is separate from compliance — it's strategic investment in India's AI ecosystem.


Personalised Compliance Checklist

Select your role to get a tailored action plan:

👨‍💻
Startup Founder
🎨
Content Creator
⚖️
Legal / Compliance
📣
Digital Marketer
📰
Journalist / Editor
🙋
Concerned Citizen

🙋 Frequently Asked Questions

Can the government read my WhatsApp messages?

Not directly through these rules. Rule 4(2) only requires WhatsApp to identify the first originator of a forwarded message — not the content itself. This requires a court order or Section 69 order, only for serious offences (terrorism, sovereignty, CSAM, etc.).

For actual message interception, the government must follow the separate IT (Interception) Rules 2009 with proper authorisation. End-to-end encryption means the platform itself can't read message content — but it CAN identify who started a forwarding chain.

Do face filters and fun edits count as deepfakes?

No. The definition has explicit carve-outs for "routine or good-faith editing, formatting, enhancement" that doesn't "materially alter, distort, or misrepresent" the original content. A fun face filter that makes you look like a cat isn't pretending to be real.

The law targets content that is "perceived as indistinguishable from a natural person or real-world event." However, if someone uses AI to create a hyper-realistic video of you saying things you never said — that's clearly caught.

What happens if a platform doesn't comply?

Under Rule 7, non-compliance = loss of "safe harbor" under Section 79(1) of the IT Act. The platform becomes directly liable for the content — as if it created it. It can face prosecution under the IT Act and Bharatiya Nyaya Sanhita.

The Chief Compliance Officer (India-resident) becomes personally liable if they failed due diligence — though they must be given a hearing first. This is the nuclear option; the threat alone ensures compliance.

My site hosts user comments — am I an intermediary too?

Yes, you're technically an intermediary. But don't panic — full SSMI obligations only kick in above 50 lakh users. As a small intermediary, your Rule 3 obligations are: publish policies, have a grievance mechanism, respond to law enforcement within 72 hours, and take down illegal content within 3 hours of a valid government order.

The 2026 AI labelling rules primarily target platforms that enable creation of synthetic content. A simple comment section doesn't trigger those.

Does the DPDP Act override the IT Rules (or vice versa)?

They operate in parallel, not in a hierarchy. The IT Rules govern platform behaviour (content moderation, grievances, takedowns). The DPDP Act governs data processing (consent, retention, breach notification). Where they overlap (e.g., law enforcement data access), both apply simultaneously. Neither overrides the other.

What if someone posts my intimate or morphed images?

Under the 2026 rules, the platform must act within 2 hours (down from 24 hours) for content that: exposes private areas, shows nudity, depicts sexual acts/conduct, or involves impersonation including AI-morphed images. 📄 2026 PDF, Pg 11

Report it immediately through the platform's complaint mechanism. If unresolved, escalate to the Grievance Appellate Committee.

Can I still make AI art?

Yes, with conditions. The rules don't ban AI art — they regulate it. (1) The platform must block illegal content creation (CSAM, false documents, impersonation). (2) Your output must be labelled with embedded metadata. (3) You must declare it as synthetic when uploading to social media.

The "routine creation of presentations, educational materials, research outputs" exception covers most creative and professional uses. The risk is in creating content that realistically portrays real people or events in misleading ways.

Are satirical memes and parody deepfakes illegal?

It depends on realism. The law targets content "perceived as indistinguishable from a natural person or real-world event." A clearly absurd meme with a politician's face on a cartoon body? Unlikely to qualify. An AI video making a politician appear to say something inflammatory in a realistic setting? Squarely covered.

The law looks at likelihood of deception, not intent. Even if you meant satire, if viewers could reasonably believe it's real, it's caught. Safest approach: label all AI-modified content clearly.


🎯 10 Things to Remember

1

IT Rules 2021 have been amended four times (Oct 2022, Apr 2023, Oct 2025, Feb 2026). The Feb 2026 amendment is the most impactful — introducing AI/deepfake regulations.

2

Takedown deadline: 36 hours → 3 hours. The single biggest operational change for all platforms in India.

3

"Synthetically generated information" is now a legally defined term. Covers AI deepfakes but excludes routine editing, documents, and accessibility tools.

4

All AI content must be labelled AND embedded with permanent metadata. Platforms can't let users strip the labels.

5

Big platforms must now proactively detect synthetic content. The language changed from "endeavour" to "deploy" — soft to hard obligation.

6

Platforms removing AI content proactively are explicitly protected under safe harbor — won't lose Section 79 protection.

7

DPDP Act 2023 and IT Rules work in parallel. Data protection for your data; IT Rules for platform behaviour.

8

Government accesses your data through two pathways: IT Rules (law enforcement orders, 72 hrs) and DPDP Act (security exemptions).

9

Online gaming has its own entire sub-framework (Rules 4A, 4B, 4C) with self-regulatory bodies, KYC, and ban on platform-financed gambling.

10

These rules apply to any platform serving Indian users — even if headquartered abroad. India is the world's largest open internet market. Comply or leave.


Source: IT Intermediary Rules 2021 (updated 06.04.2023) · IT Intermediary Rules 2021 (updated 10.02.2026) · DPDP Act 2023

PDF References: meity.gov.in · pib.gov.in · dgt.gov.in · aicte.gov.in

Disclaimer: This e-book is for educational purposes only and does not constitute formal legal advice. Reference Official Gazette notifications for exact legal wording.
