    MeitY has issued a Report on AI Governance Guidelines Development for Public Consultation

    • 23.01.2025
    • By Rishikaa & Sapna Singh
    Saikrishna & Associates

    The Subcommittee constituted by the Ministry of Electronics and Information Technology (“MeitY”) has issued a Report on AI Governance Guidelines Development (“AI Governance Report”) for public consultation.

    The Subcommittee on AI Governance and Guidelines Development, constituted under the guidance of the Advisory Group chaired by the Principal Scientific Advisor, has published this report with the aim of examining key issues pertaining to AI governance in India, conducting a gap analysis of existing frameworks, and proposing recommendations to shape AI governance in India.

    Overview of the AI Governance Report

      • Proposed AI Governance Principles

    The AI Governance Report proposes AI governance principles aligned with contemporary global and industry standards. These principles are as follows:

      • Transparency: Adequate information should be provided to enable users to understand the AI system they are dealing with.
      • Accountability: Developers and deployers of AI systems should take responsibility for the functioning and outcome of their systems.
      • Safety, reliability & robustness: AI systems should be made resilient to risks, errors, misuse, etc. and should be regularly monitored.
      • Privacy & Security: AI systems should comply with applicable data protection laws and maintain users’ privacy.
      • Fairness & non-discrimination: AI systems should be fair and inclusive to all and should not perpetuate biases and prejudices.
      • Human-centred values & ‘do not harm’: AI systems should be subject to human oversight and should operate in a manner that respects the rule of law and minimises adverse outcomes on society.
      • Inclusive & sustainable innovation: AI systems should be used to benefit all and to help deliver on the Sustainable Development Goals (SDGs).
      • Digital by design governance: Suitable digital technologies must be leveraged to govern and regulate AI systems and to ensure their compliance with applicable law.
    • Gap Analysis of Existing Frameworks
      • For the purposes of the AI Governance Report, the Subcommittee conducted a gap analysis to identify the areas that need to be examined under existing laws to address the risks and harms of AI systems and to enable effective compliance:
        • Deepfakes and malicious content: As per the AI Governance Report, existing legal safeguards such as the Information Technology Act, 2000 and the Bharatiya Nyaya (Second) Sanhita protect against the misuse of malicious deepfakes and are sufficient to detect, prevent, remove and prosecute the creation and distribution of such content. However, there is a need to identify technical measures that allow timely detection of deepfakes and their removal before serious harm is caused. To this end, the Subcommittee suggests measures such as assigning unique and immutable identifiers to different participants and watermarking content throughout its lifecycle.
        • Cybersecurity: The AI Governance Report notes that the CERT-In Rules and the CERT-In Directions, the Information Technology (National Critical Information Infrastructure Protection Centre and Manner of Performing Functions and Duties) Rules, 2013, and the Digital Personal Data Protection Act, 2023 (yet to come into force), read with sectoral laws, constitute the country's cybersecurity framework. However, the application of this framework needs to be strengthened in the context of AI systems.
        • Intellectual Property: The Subcommittee reviewed two areas of copyright law: (i) the training of models on copyrighted data and liability for infringement, and (ii) the copyrightability of works generated using foundation models. The report highlights the need for legal clarity regarding the use of copyrighted works in AI training and content generation. It also seeks suggestions on the guardrails required to address issues such as the use of copyrighted material without due authorisation, infringement of rights, the scope of such rights, and the degree of human intervention necessary to qualify the developer of AI-generated content as its author.
        • AI-led bias and discrimination: The report highlights the need to understand whether the current laws are equipped to deal with the harms of bias and discrimination.
        • Other areas: The AI Governance Report also identifies areas, namely the training of AI models on copyrighted materials, antitrust, the definition of AI, and safe harbour, that require a policy position and further technological developments before governance mechanisms can be proposed for them.
      • Additionally, the report stressed the need for transparency and responsibility across the AI ecosystem and proposed an ecosystem view to understand the development and deployment of AI systems, particularly in sensitive use cases.
      • The AI Governance Report also highlighted the fragmented approach taken by regulators and departments to deal with the harms of AI and stated that, while sector-specific regulation is necessary given the specialisation of sectors, a holistic approach is the need of the hour.
    • Recommendations:
      • Whole-of-Government Approach: Establish an Inter-Ministerial AI Coordination Committee/Governance Group (“Committee/Group”) to implement a ‘whole-of-government approach’ that unifies the efforts of various AI governance authorities and institutions at the national level. The AI Governance Report proposes that the Committee/Group include both government officials and non-government representatives, including experts from industry and academia.
      • Technical Secretariat: Set up a Technical Secretariat within MeitY to support the Committee/Group as a technical advisory body combining expertise from multiple disciplines, by mapping India’s AI ecosystem, monitoring AI developments, working with industry on innovative solutions for AI governance, and identifying regulatory gaps and capacity-building needs.
      • AI Incident Database: Establish an AI incident database to monitor real-world AI problems and their impacts, so that harms can be mitigated and their recurrence avoided.
      • Industry engagement: Engage with the industry to adopt voluntary transparency commitments, focusing on, inter alia, disclosure of intended purposes; regular transparency reporting; processes to test and monitor data quality, model robustness and outcomes; validation of data quality and governance measures; and processes to ensure peer review by qualified third-party experts, in a manner that complements the existing legal framework.
      • Suitability of technological measures: Examine the appropriateness and viability of technical measures to strengthen compliance and enforcement tools across the AI ecosystem and address related risks.
      • Formation of a sub-group to propose legislation: Form a Sub-Group to work with MeitY on strengthening the legal framework for digital industries under the proposed Digital India Act (DIA) and enhance its effectiveness in managing AI and digital technology risks.
    Our Take

    The AI Governance Report appears to be a precursor to upcoming developments in the governance of AI systems and technologies in India. The report also recommends deliberations on appropriate requirements under the proposed Digital India Act, which seeks to replace the Information Technology Act, 2000 and to regulate emerging technologies, including AI, in India. This is a step forward, as there has been no significant development on the DIA since the stakeholder meetings on the proposed legislation in March and May 2023. The AI Governance Report emphasizes a balanced approach that combines legal regulation with self-governance and underscores the importance of collaboration among multiple stakeholders and regulators.

    It includes multiple promising elements, such as a focus on developing technological solutions for AI risks, the creation of an AI incident database for continuous learning, and support for cross-sectoral collaboration.

    Furthermore, the AI Governance Report attempts to align with international and industry standards by proposing broad principles for responsible AI governance.

    However, at this stage, the AI Governance Report offers only cursory measures and falls short of providing detail in certain critical areas:

    • It does not discuss global advancements and best practices in AI governance, offering an incomplete perspective on international developments and expectations. Instead, it places significant responsibility on the Inter-Ministerial AI Coordination Committee and the Technical Secretariat to assess technology development, deployment, and global initiatives, without providing adequate support mechanisms.
    • Additionally, the report adopts a light-touch approach, mentioning key issues without exploring them in detail or providing definite solutions.
    • Moreover, while the AI Governance Report stipulates the roles of the Inter-Ministerial AI Coordination Committee and the Technical Secretariat, it does not provide clarity on the manner of their collaboration and operations.
    • It does not offer clarity on MeitY’s expectations for real-time implementation, leaving stakeholders uncertain about execution and timelines.
    • There is also potential for regulatory overlap among various bodies, creating ambiguity in roles and responsibilities.
    • There is a lack of clear guidelines for cross-border AI governance, and the needs of small and medium enterprises (SMEs) have not been addressed.
    • Furthermore, while the AI Governance Report identifies the black box problem and suggests measures for transparency, it does not elaborate on these transparency requirements or the means to mitigate the risks associated with the black box issue.
    • The AI Governance Report does not consider the possibility of establishing an independent regulatory body or a dedicated legal framework to handle AI incidents and systems.
    • The discussion of the concerns highlighted in the AI Governance Report, such as deepfakes, bias, and copyright infringement, requires a more holistic approach that keeps pace with technological developments occurring in real time. Ideally, this discussion should have laid the groundwork for the Inter-Ministerial AI Coordination Committee and the Technical Secretariat to advance further.

    While the AI Governance Report marks an important step towards establishing a structured AI governance framework in India, its success will largely depend on effective implementation and the extent of industry collaboration it can foster.

    Links

    Report on AI Governance Guidelines Development: https://indiaai.s3.ap-south-1.amazonaws.com/docs/subcommittee-report-dec26.pdf

    This article was originally published by Saikrishna & Associates