
On November 5, 2025, the Ministry of Electronics and Information Technology (“MeitY”) officially unveiled the India AI Governance Guidelines under the IndiaAI Mission. This much-anticipated framework provides a comprehensive blueprint for ethical and responsible AI deployment across sectors in India. Rather than imposing a new law for artificial intelligence, the guidelines advocate a “lightweight” and adaptive regulatory approach that leverages existing laws and promotes innovation with appropriate safeguards. IT Secretary S. Krishnan explained at the launch: “India has consciously chosen not to lead with regulation but to encourage innovation while studying global approaches. Wherever possible, we will rely on existing laws and frameworks rather than rush into new legislation.” In essence, the government’s focus is to foster AI growth under current legal regimes, intervening with new rules only when truly necessary to protect citizens.
The final Guidelines build on the January 2025 draft report, which proposed a regulatory framework aimed at enabling responsible AI development alongside industry-led self-regulation. They were prepared under the guidance of the Principal Scientific Adviser to the Government of India (Prof. Ajay Kumar Sood) through a multi-stakeholder effort involving government, industry, academia, and civil society. The result is a principle-based, techno-legal governance framework that seeks to balance “AI for All” innovation with accountability and safety.
The Guidelines are structured in four parts:
(1) Seven core guiding principles (“sutras”) forming the philosophical foundation;
(2) Key recommendations across six governance pillars;
(3) An Action Plan with short-, medium-, and long-term steps; and
(4) Practical guidelines for industry and regulators on implementation.
Emphasizing Adaptation Over New Legislation
A central theme of the Guidelines is that India will not enact a dedicated AI law at this stage but will instead adapt and update existing legal frameworks to address AI-related challenges. This adaptive approach is rooted in the belief that many AI risks (such as bias, misinformation, or deepfakes) can be managed under current statutes with some reinterpretation or amendment. For example, the committee noted that India’s Information Technology Act, 2000 and the new Bharatiya Nyaya Sanhita, 2023 (which replaces the Indian Penal Code, 1860) already contain provisions that could apply to malicious uses of AI-generated deepfakes. Likewise, the Digital Personal Data Protection Act, 2023 (“DPDP Act”) governs the use of personal data for training AI models, meaning companies must obtain consent before using individuals’ data to train AI or risk violating data protection rules. Similarly, the Consumer Protection Act, 2019 safeguards consumers against unfair trade practices, misleading advertisements, and deficiencies in service, and its provisions apply when AI-enabled systems engage in any of these violations.
Instead of rushing a new AI-specific law, MeitY’s framework will tweak definitions in existing laws to cover AI contexts. For instance, one proposal is to update the definition of “intermediary” under the IT Act and assess how it would apply to modern AI systems that generate data based on user prompts, or even autonomously, and that refine their outputs through continuous learning. This raises the question of whether AI service providers (such as generative AI platforms) should enjoy the same “safe harbour” protections as intermediaries under Section 79 of the IT Act or face new accountability obligations. The Guidelines signal that India will examine such gaps and, where needed, amend existing laws or introduce targeted amendments rather than immediately drafting an overarching AI Act. The emphasis is on “techno-legal” solutions: embedding legal safeguards and ethical principles into technology design itself, so that compliance becomes automatically enforceable and the need for after-the-fact enforcement is reduced.
The Guidelines also recommend a coordinated, whole-of-government approach to manage AI policy effectively and anticipate future developments. Given AI’s cross-sectoral impact, limited regulatory capacity, and the absence of a dedicated regulator for emerging technologies, India’s AI governance would be strengthened by institutional collaboration, bringing together key ministries, sectoral regulators, and standards bodies to jointly design and implement policy frameworks aligned with the objectives of responsible AI governance.
The Seven Guiding Principles (“Sutras”) of Responsible AI
The India AI Governance Guidelines outline seven guiding principles that articulate India’s AI governance philosophy. Adapted from the RBI’s August 2025 report of the Framework for Responsible and Ethical Enablement of Artificial Intelligence (“FREE-AI”) Committee, which laid down principles for AI development and risk mitigation in the financial sector, these seven “sutras” are outlined below:
Key Recommendations across Six Governance Pillars
The Committee, guided by the seven sutras, proposes an AI governance approach that drives innovation and progress while mitigating risks. Governance should extend beyond regulation to include education, infrastructure, diplomacy, and institution building across six key pillars outlined below:
Applicability of existing laws – The Committee has reviewed India’s existing system of laws and regulations, including those governing information technology, data protection, intellectual property, competition, media, employment, consumer protection, and criminal law, and finds that many of the risks arising from AI can be addressed through existing frameworks. For example, deepfakes used to impersonate individuals can be regulated under the Information Technology Act, 2000 and the Bharatiya Nyaya Sanhita, while the use of personal data without consent for training AI models is governed by the Digital Personal Data Protection Act, 2023. The Committee also notes the need for a comprehensive review of laws such as the Pre-Conception and Pre-Natal Diagnostic Techniques (“PC–PNDT”) Act to address emerging AI applications like radiology analysis, which could be misused for unlawful sex selection. In priority sectors such as finance, where AI adoption is growing rapidly, the Committee recommends that potential regulatory gaps be promptly identified and addressed through targeted legal amendments and sector-specific rules.
There are ongoing deliberations on key areas of AI regulation, including classification, liability, data protection, content authentication, and copyright.
To operationalise this, the Committee recommends a comprehensive approach that includes: (i) developing a risk classification framework for India, (ii) creating a national AI incident database for real-time monitoring and empirical analysis, (iii) promoting voluntary frameworks like industry codes and self-certifications to encourage responsible innovation, (iv) adopting techno-legal approaches that embed compliance within system design, including DEPA for AI training, and (v) ensuring human oversight and automated safeguards to prevent loss of control. Together, these measures create a layered risk mitigation system, combining legal, technical, and ethical tools, to promote safe, trustworthy, and inclusive AI governance in India.
The Guidelines also push for greater transparency around AI systems: companies should publish AI transparency reports and disclose how they operate and manage AI risks, enabling regulators and the public to scrutinize compliance. Additionally, AI service providers must establish grievance redressal mechanisms so that individuals can easily report AI-related harms or concerns. Organisations should maintain accessible, multilingual grievance redressal systems that respond promptly and use feedback to improve AI products. These mechanisms must operate independently of the proposed AI Incidents Database. Together, these measures aim to make AI deployments accountable and responsive to oversight.
The Committee recommends clarifying how developers, deployers, and end-users are governed under existing laws like the IT Act, with obligations proportionate to their role and risk. It calls for clear enforcement guidance, effective grievance redressal with feedback loops, and voluntary accountability measures such as self-certifications and audits. Greater transparency across the AI value chain is also essential for informed and consistent regulatory oversight.
Phased Implementation: Short, Medium, and Long-Term Roadmap
The Guidelines lay down a concrete Action Plan that provides short-term, medium-term, and long-term steps to operationalize the framework. This phased roadmap is important for businesses and stakeholders to understand the timeline of changes and prepare accordingly. Key action items in each timeframe and the expected outcomes are as follows:
Short-Term
Action Items:
Expected Outcomes:
Medium-Term
Action Items:
Expected Outcomes:
Long-Term
Action Items:
Expected Outcomes:
Practical Guidelines for Industry and Regulators
For Industry (AI Developers & Deployers): The Guidelines urge companies working with AI to proactively adopt responsible AI practices now. In particular, industry actors should:
For Regulators and Government Agencies: The Committee recommends the following guiding principles for policy formulation and implementation by agencies and sectoral regulators within their respective domains:
Implications for Businesses and Next Steps
While the Guidelines themselves are non-binding, they are highly indicative of the government’s expectations and future regulatory plans. Businesses should take note of several implications:
The release of the India AI Governance Guidelines is a significant milestone in the country’s journey to harness artificial intelligence for inclusive growth while ensuring it remains safe and trustworthy. By articulating clear principles and a structured action plan, the government has given stakeholders a roadmap of what to expect in AI policy over the coming years. In the coming months, we can expect further clarity as the recommended bodies (AIGG, AISI, etc.) are set up, regulatory gap analyses are conducted with corresponding amendments to laws, and a master circular is published consolidating applicable regulations and best practices to support compliance.
This article was originally published on Sai Krishna Associates

