The Ministry of Electronics and Information Technology (MeitY), on 15th March 2024, issued a revised advisory concerning the deployment of artificial intelligence (AI) models by Intermediaries.[i]
This revised advisory supersedes a previous advisory issued on 1st March 2024, which, inter alia, required prior and explicit permission from the Government before deploying any AI Technologies that were under testing or otherwise unreliable.[ii]
The March 1 advisory drew criticism from various quarters, including startup founders such as Aravind Srinivas, CEO of Perplexity, who called it a “bad move by India” in a post on X.[iii]
The revised advisory addresses some of these concerns by removing the requirement to obtain explicit government permission before deploying AI Technologies.
---
Key points of the revised advisory include:
View: Does this mean that when a User enters a text prompt that violates the IT Rules (thereby uploading unlawful content), the AI Platform becomes liable for violating this advisory? The answer is pertinent because an AI Platform can never fully control the text prompts entered by Users. It is also worth noting that, although the conversation is mostly private, between the User and the AI, AI Platforms cannot realistically stop Users from taking screenshots and posting them online.
View: The ambit and definition of the term “integrity of the electoral process” needs to be clarified in order to assess what sorts of output the advisory restricts. For instance, if factual information is provided about a political leader, would that threaten the electoral process?
View: It is unclear whether the mechanism currently used by platforms such as ChatGPT, which note the unreliability of their output below the search bar, would qualify as an “equivalent mechanism”, or whether a “popup”-style mechanism would be required. Moreover, no AI Platform currently appears able to guarantee that its technology is fully reliable, especially as to the accuracy of facts. The restriction therefore seems to apply to any and all AI Technologies, increasing compliance costs and the use of resources.
View: These guidelines are in line with the obligations placed on Intermediaries under Rule 3(1)(c) of the IT Rules.
View: This is a useful tool for identifying and punishing creators of deepfake videos who engage in illegal activities, make harmful statements, target certain celebrities or individuals, spread misinformation, or, in some cases, commit fraud.
---
It was clarified that all Intermediaries are required to follow the guidelines and compliances mentioned in this advisory.
However, regarding the legal enforceability of the advisory, MeitY officials have clarified that it serves as guidance rather than a regulatory framework, emphasising the need for caution and responsibility in AI deployment.
The effectiveness of such measures, and their application in the digital landscape, remains to be tested, especially given the evolving nature of AI technology.
The guidelines in this advisory are in addition to those in the advisory dated 26th December 2023, which required intermediaries to communicate clearly and precisely to Users the categories of content that are prohibited, particularly those specified under Rule 3(1)(b) of the IT Rules.
This article was first published on IPRMENTLAW