
    Emerging Legal Issues With The Use Of Generative AI

    • 13.12.2023
    • By Tanisha Khanna and Gowree Gokhale
    Nishith Desai Associates

    Generative artificial intelligence (“AI”) tools have disrupted several industries that have hitherto relied exclusively on humans to create works or to make decisions. Tools like ChatGPT, AIVA, Jukebox, and Google Bard are increasingly finding application in entertainment, IT/ITeS, healthcare, education, real estate, and law enforcement, among other sectors. OpenAI has recently launched ChatGPT Enterprise, a version of ChatGPT for business use1. The lower cost, higher speed and efficacy of such tools make them attractive to businesses.

    There are several legal aspects to consider while relying on generative AI tools, such as ownership of intellectual property in the resulting works, possible infringement of third-party intellectual property, personality rights issues, generation of unlawful content, and bias and prejudice in decision-making, among others. In this article, we discuss these considerations under Indian law.

    Generative AI tools are trained on large datasets, and such datasets form the basis, or input, for their output. For instance, OpenAI trains tools like ChatGPT as large language models (LLMs) on a broad corpus of texts. AIVA has been trained on a large database of classical scores written by famous composers such as Bach, Beethoven and Mozart, based on which it creates its own sheet music.2

    When a human provides an instruction, input or prompt (such as “What is the time difference between India and the UK?”), the tool generates a response or output (“The time difference is either four and a half or five and a half hours, depending on daylight saving time”).
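
The arithmetic behind the sample answer above can be checked directly: India Standard Time is UTC+5:30, while the UK observes UTC+0 (GMT) in winter and UTC+1 (BST) in summer. A minimal sketch:

```python
# India Standard Time is UTC+5:30; the UK is UTC+0 (GMT) in winter and
# UTC+1 (BST) during daylight saving time, so the gap is 5.5 or 4.5 hours.
from datetime import timedelta

IST = timedelta(hours=5, minutes=30)
GMT = timedelta(hours=0)   # UK winter time
BST = timedelta(hours=1)   # UK summer (daylight saving) time

winter_gap = (IST - GMT).total_seconds() / 3600
summer_gap = (IST - BST).total_seconds() / 3600
print(winter_gap)  # 5.5
print(summer_gap)  # 4.5
```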

    Intellectual Property Rights

    Under the Indian Copyright Act, 1957 (“Copyright Act”), copyright subsists in certain works: (i) original literary, dramatic, musical and artistic works3, (ii) cinematograph films4, and (iii) sound recordings5. The Copyright Act grants the owners of such works certain exclusive rights in relation to them6. Anyone using or relying on the output of generative AI tools must ensure that the output does not infringe third-party rights. An interesting question also emerges as to whether copyright subsists in the output of the Gen AI tool at all.

    Copyright in input and output

    Copyright protection does not extend to ideas, concepts or themes, but rather to the expression of an idea in a tangible form.7 If an input is simply an idea, it is difficult to argue that copyright subsists in it. Hence, in most cases, it must be evaluated whether copyright subsists in the resulting expression, i.e., the output.

    Copyright subsists only in original literary, dramatic, musical and artistic works. The work must not be copied from another work, must originate from an author, and must involve a ‘minimal degree of creativity’8.

    There is an ongoing debate as to whether training tools on large datasets without licenses, and generating output based on such datasets, infringes third-party copyright.9 Some platforms have already taken steps to protect their data from being harvested by web crawlers. For instance, certain Indian news publishers have blocked access to OpenAI’s web crawler to protect their content.10

    Whether the resulting output is an original work, or whether it infringes copyright, is a question of the facts and circumstances of each case. For instance, if an output reproduces, or very closely resembles, an image or content on which the tool was trained, it may amount to reproducing the work in any form11, and the risk of copyright infringement would be greater.

    Authorship

    There must be an ‘author’ of a work for copyright to subsist under the Copyright Act. The author is typically the first owner of the work (subject to contractual arrangements such as assignment or commissioning agreements).

    Under the Copyright Act, the ‘author’ of a computer-generated literary, dramatic, musical or artistic work is the person who causes the work to be created12. However, based on the provisions of the Copyright Act, it does not appear that ‘computer generated’ works include works created using generative AI tools. The question also arises whether artificial persons such as AI tools can be authors of works under the Copyright Act. The provisions of the Copyright Act and jurisprudence13 suggest that the Act contemplates only natural persons as authors of works.

    A United States District Court recently considered14 whether a work generated entirely by a generative AI system is eligible for copyright registration. The court upheld the Copyright Office’s decision to deny registration, as the work lacked the human authorship necessary to support a copyright claim. The court held that human creativity is at the core of copyrightability, even when that creativity is channelled through new tools or into new media. Human authorship, i.e., a human applying intellectual, creative or artistic labour, is the bedrock requirement of copyright.

    Until the ambiguity around authorship and ownership of Gen AI-generated works is resolved, it is important to assess the terms and conditions of each tool, which govern IP ownership and use. Some terms of use assign copyright in the ultimate output to the user15, while others grant only a non-commercial or limited license16.

    Recently, Reuters issued a memorandum17 to its journalists regarding the use of generative AI to create news content, cautioning that the use of such tools would complicate its ability to protect Reuters’ intellectual property rights. The memorandum specifically noted that the terms of some tools require users to ‘relinquish legal rights to content, and some countries view AI-generated content as not copyrightable.’ The memorandum also appeared to place the onus on reporters and editors for content produced using generative AI.

    Personality Rights and Defamation

    Under Indian law, personality rights (or publicity rights) are not recognized under any statute; they have evolved under common law through court judgments. The right of publicity has been recognized as the right to control the commercial use of human identity18, which encompasses an individual’s likeness, features, voice, signature and other attributes. Any unauthorized use of such attributes for commercial gain, such as to indicate false endorsement by a celebrity19, can be considered an infringement of personality rights.

    Generative AI tools have frequently been used to create works featuring celebrities and aspects of their personality. Recently, images depicting Bollywood actresses as Barbie were created using Midjourney, and music featuring Rihanna singing Beyonce’s song was created using ChatGPT.20 Deepfake songs have also been produced by cloning an artist’s voice to sing another artist’s songs, including the voices of deceased artists. Platforms like Google and Meta are exploring ways to license artists’ voices from music labels for AI-generated songs, in exchange for royalty payments.21

    In addition to possible infringement issues, any commercial use of works featuring the attributes of celebrities, or indicating celebrity endorsement, carries the risk of personality rights claims. Further, if such use is insulting to the individual or negatively affects their reputation, there may also be a risk of defamation claims by such celebrities.

    Decision making, Bias, Prejudice, and Stereotypes

    Another emerging issue is the dependence of human decision making on Gen AI output. The question which arises is whether humans can completely delegate their decision making to generative AI tools in various public functions and professions. The trend that may emerge is that while humans may rely on such tools for an initial output, complete delegation should be avoided. Separately, generative AI tools may produce biased outcomes: if such tools are trained on biased, incorrect or flawed datasets, their output may be biased or flawed. For instance, a facial recognition tool trained on a dataset covering only one racial group may fail to identify persons of other races. Such tools can thus perpetuate racial or gender biases and stereotypes.
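
The dataset-bias point above can be illustrated with a deliberately simplified sketch. The nearest-centroid “matcher”, the feature vectors and the threshold are all invented for illustration and are not drawn from any real facial recognition system; the sketch only shows how a model trained exclusively on samples from one group matches members of that group while rejecting members of an unrepresented group.

```python
# Toy illustration of dataset bias (all values hypothetical): a matcher
# "trained" only on group A fails to recognize a sample from group B.

def centroid(samples):
    """Average each feature across the training samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def matches(model, sample, threshold=1.0):
    """Declare a match only if the sample is close to the training centroid."""
    return distance(model, sample) <= threshold

# Training data drawn exclusively from "group A" (hypothetical features).
group_a_train = [[0.9, 0.1], [1.0, 0.2], [0.8, 0.15]]
model = centroid(group_a_train)

print(matches(model, [0.95, 0.12]))  # True: same group is recognized
print(matches(model, [0.1, 0.9]))    # False: unrepresented group is rejected
```

The flaw is not malice in the algorithm but the skew in the training set, which is why the composition of datasets matters for the fairness of the output.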

    A lawsuit has been filed before a US district court22 against Workday, a software company that develops AI applications for Human Resources (HR) decision making. The plaintiff alleges that the AI software enables hiring discrimination against applicants with his profile (disabled, of a particular race, and over a particular age).

    Organizations that rely on the output of AI tools to make decisions about people must be cognizant of such issues, and implement measures to enable transparency and accountability in the decision-making process.

    Generation of Unlawful Content

    There are increasing instances of generative AI tools being used to create unlawful content, such as fake news and misinformation, deepfakes (for example, phone scams carried out by cloning the voices of known persons23), and IP-infringing content.

    It appears that human oversight is still required to check the accuracy and overall legality of such content. The memorandum issued by Reuters24 required journalists to ensure that results generated using AI meet the required standards of quality, accuracy and reliability, with ‘rigorous oversight by newsroom editors.’ It also advised them to transparently disclose reliance on generative AI in content, and to check facts, correct for bias and fix errors prior to publication.

    The question of who is liable for such content, i.e., the human providing the input or the developer of the platform/tool, remains unclear. Under Indian law, certain entities termed ‘intermediaries’, essentially passive transmitters of content, are granted immunity or ‘safe harbour’ from liability for such content if they satisfy certain conditions25. Among other conditions, intermediaries must not select or modify the information contained in transmissions.

    If liability is imposed for content generated by such tools, it is possible that developers of such platforms may argue that the platforms are intermediaries entitled to safe harbour, subject to fulfillment of these conditions.

    The Government of India has indicated26 that the proposed Digital India Act will classify different intermediaries and impose varied responsibilities on them based on their business models. Generative AI tools may form an independent category of intermediaries that is separately regulated under the Digital India Act.

    This article was originally published by Nishith Desai Associates.