In early 2026, the Ministry of Electronics and Information Technology (MeitY) finalized a landmark directive mandating the explicit labeling of all AI-generated content across social media and digital platforms. This mandate is coupled with a significant reduction in the mandatory content takedown timeline, now slashed to just 2–3 hours for sensitive material. These rules aim to combat the rapid spread of deepfakes and misinformation, ensuring that the digital ecosystem remains trustworthy as AI-generated media becomes increasingly indistinguishable from reality.
Under the new mandate, platforms must ensure that any image, video, or text generated through artificial intelligence carries a visible label and a permanent metadata watermark. This "Digital Fingerprinting" is designed to let users immediately identify synthetic media and to deter the malicious use of AI for identity theft or political misinformation. Platforms that fail to implement these labeling standards face severe penalties, including the potential withdrawal of "safe harbour" protection under the IT Act.
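One way such a permanent metadata watermark could work in practice is a tamper-evident provenance record attached to each piece of content: a hash of the media plus an `ai_generated` flag, signed by the platform. The sketch below is a minimal illustration of that idea, not the mechanism prescribed by the MeitY rules; the `sign_provenance` helper and the signing key are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; in production this would be securely managed.
PLATFORM_KEY = b"example-signing-key"

def sign_provenance(content: bytes, ai_generated: bool) -> dict:
    """Build a tamper-evident provenance record for a piece of content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": ai_generated,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and has not been altered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was edited after labeling
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

media = b"synthetic image bytes"
label = sign_provenance(media, ai_generated=True)
print(verify_provenance(media, label))      # True
print(verify_provenance(b"edited", label))  # False
```

Because the signature covers both the hash and the `ai_generated` flag, stripping or flipping the label invalidates the record, which is the property a "digital fingerprint" needs.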
The most disruptive component of the 2026 rules is the requirement that "Significant Social Media Intermediaries" remove flagged content within 2–3 hours, a sharp acceleration from the previous 24-to-36-hour window. The ministry argues that in a hyper-connected environment, harmful content can go viral within minutes, making the older takedown windows obsolete. To comply, platforms are now expected to pair automated, AI-driven moderation tools with localized 24/7 compliance centers.
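The compressed window reduces to a simple SLA calculation for compliance teams: every flag carries a removal deadline a fixed number of hours out. A minimal sketch, assuming the upper bound of the reported 2–3 hour range and a UTC flag timestamp (both illustrative choices, not figures from the rules themselves):

```python
from datetime import datetime, timedelta, timezone

# Assumed upper bound of the reported 2-3 hour takedown window.
NEW_WINDOW = timedelta(hours=3)
# Previous limit cited in the rules' coverage.
OLD_WINDOW = timedelta(hours=36)

def takedown_deadline(flagged_at: datetime) -> datetime:
    """Latest time by which flagged content must be removed under the 2026 rules."""
    return flagged_at + NEW_WINDOW

flagged = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
deadline = takedown_deadline(flagged)
print(deadline.isoformat())          # 2026-03-01T12:00:00+00:00
print(OLD_WINDOW / NEW_WINDOW)       # 12.0 -- the window shrinks twelvefold
```

The twelvefold compression is why manual review alone cannot meet the deadline and platforms are turning to automated triage.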