

In a significant policy move aimed at strengthening digital accountability, the Ministry of Electronics and Information Technology has issued a fresh advisory to intermediaries and online platforms regarding the deployment and governance of Artificial Intelligence. The directive underscores the government's intent to ensure that AI-driven systems operating in India adhere strictly to existing legal frameworks, particularly the Information Technology Act, 2000 and the IT Rules, 2021.
The advisory places clear responsibility on intermediaries deploying generative AI models, large language models, and algorithmic decision systems. Platforms have been reminded that safe harbour protections are contingent upon compliance with due diligence obligations. Any AI tool that generates unlawful, harmful, or misleading content could expose intermediaries to legal consequences if adequate safeguards are not in place.
A central focus of the advisory is the prevention of misinformation, deepfakes, impersonation, and content that threatens public order or national security. MeitY has emphasized that AI systems must not facilitate the creation or dissemination of content prohibited under Indian law. Platforms are expected to proactively implement guardrails against the generation of unlawful material, including defamatory, obscene, or hateful content.
The ministry has also called for transparency in AI outputs. Intermediaries deploying generative AI solutions are advised to clearly label synthetic or AI-generated content wherever applicable. This move aims to curb confusion among users and reduce the risks associated with manipulated media. By promoting disclosure, the government seeks to give users contextual clarity about the nature of the content they consume.
Another important dimension of the advisory is user protection. Platforms must establish robust grievance redressal mechanisms and ensure timely responses to user complaints concerning AI-generated content. The advisory reiterates that intermediaries remain accountable for ensuring that their systems do not violate the rights of individuals, including privacy and data protection standards.
Further, the government has advised platforms to conduct thorough testing and risk assessments before public deployment of AI tools. This includes evaluating potential biases, harmful outputs, and systemic vulnerabilities. Responsible innovation, according to the ministry, must go hand in hand with regulatory compliance and ethical safeguards.
The advisory reflects a broader policy direction in which India is seeking to balance innovation with accountability. As AI adoption accelerates across sectors including social media, e-commerce, finance, and governance, regulatory clarity becomes critical. By reinforcing existing legal obligations rather than introducing an entirely new law, MeitY signals that AI systems are not outside the ambit of current statutory frameworks.
Key Highlights
Intermediaries deploying AI tools must comply with the IT Act, 2000 and IT Rules, 2021
Safe harbour protection remains conditional on due diligence and proactive compliance
Platforms must prevent AI-generated misinformation, deepfakes, impersonation, and unlawful content
Clear labelling of synthetic or AI-generated content is encouraged to enhance transparency
Robust grievance redressal mechanisms are mandatory for addressing user complaints
Companies are advised to conduct thorough risk assessments before public deployment of AI systems
Emphasis on balancing innovation with accountability and user protection