A retrospective on the year now coming to a close is what one expects this time of year, so I will try not to disappoint. However, when I look back at the copyright developments I wrote about in 2024, the dominant issues that jump out are AI, AI and AI. You can’t read or think about copyright without Artificial Intelligence, or to be more accurate, Generative Artificial Intelligence (GAI), occupying most of the space despite the many other issues on the copyright agenda. The mantra of “AI, AI and AI”, as in “Location, Location and Location”, is apt because there are at least three important copyright dimensions related to AI: the training of AI models; copyright protection for outputs generated by AI; and infringement of copyright by works created with or by AI. Of the three, the use of copyrighted content for AI training is the most salient.
Last year in my year-ender, I also discussed AI and the numerous lawsuits that were emerging as rightsholders pushed back on having their content vacuumed up by AI developers to train their algorithms. Those lawsuits have only multiplied. At last count, there are more than 30 cases in the US, ranging from big media vs big AI (New York Times v OpenAI/Microsoft) to class action suits brought by artists and authors, as well as litigation in the UK, EU, and now in Canada (see here and here). That is just on the input side.
In terms of output, i.e. whether works produced by an AI can be copyrighted, there are a couple of interesting cases in the US where applications for copyright registration have been refused by the US Copyright Office (USCO) because of a lack of human creativity. A couple of months ago, I discussed two such high-profile cases, one brought by Stephen Thaler and the other by Jason Allen. To date, the USCO has not budged, although it is undertaking an extensive study of the issue. Part 1 of its study, on digital replicas, was published in July of this year. The next part, on copyrightability, is expected to be published in January, with the issues of ingestion for training and licensing to follow in Q1 2025.
While the USCO has to date denied applications for copyright registration of AI-generated works, the Canadian copyright office (the Canadian Intellectual Property Office, or CIPO) has been caught up in a problem of its own making. This is because Canadian copyright registration is granted automatically, so long as tombstone data and the prescribed fee are provided. The work for which registration is sought is not examined. As a result, copyright certificates have been issued for works created by AI, notwithstanding the general presumption that copyright protection is accorded only to human-created works (although this is not explicitly stated in the Act). In July, a legal challenge was launched against copyright registrant Ankit Sahni, who successfully registered a work with CIPO claiming an AI as co-author. The case was brought by the Canadian Internet Policy and Public Interest Clinic (CIPPIC) at the University of Ottawa, as I wrote about here (Canadian Copyright Registration and AI-Created Works: It’s Time to Close the Loophole).
While the courts in the US, UK, Canada and elsewhere are grappling with various issues related to AI and copyright, governments are studying the issue.
In Australia, the Select Committee on Adopting Artificial Intelligence issued its final report in November. While the report was wide-ranging, three of its recommendations related to copyright:
• engagement with the creative industries to address unauthorized use of their works by AI developers and tech companies,
• transparency in training data, by requiring AI developers to disclose the use of copyrighted works in training datasets and ensure proper licensing and payment for these works, and
• remuneration for AI outputs, with an appropriate mechanism to be determined through further consultation.
These are important principles, but how they will be implemented in practice remains to be determined.
In Canada, a consultation on AI and copyright was launched late in 2023, with submissions to be received by January 15, 2024. The Canadian cultural community put forth three key demands:
• No weakening of copyright protection for works currently protected (i.e. no exception for text and data mining to use copyrighted works without authorization to train AI systems)
• Copyright must continue to protect only works created by humans (AI generated works should not qualify)
• AI developers should be required to be transparent and disclose what works have been ingested as part of the training process (transparency and disclosure).
Submissions to the consultation were published in mid-year, but since then there has been no apparent action. Given the current political crisis facing the Trudeau government, none is expected in the near term, although the issue will inevitably have to be addressed after the general election in 2025.
While the EU has already established some parameters dealing with the use of copyrighted materials for AI training, the new UK Labour government is taking another run at the issue after various proposals in Britain under the Tories to find a modus vivendi between the AI and content industries went nowhere. The current UK discussion paper on Copyright and Artificial Intelligence, which seems excessively tilted in favour of the AI industry, has aroused plenty of controversy. While it says some of the right things, such as proclaiming that one of the objectives of the consultation is to “support…right holders’ control of their content and ability to be remunerated for its use”, the thrust of the paper is to find ways to encourage the AI industry to undertake more research in the UK by establishing a more permissive regime with respect to the use of copyrighted content. It is based on three self-declared principles (notice how these things always seem to come in threes?):
• Control: Right holders should have control over, and be able to license and seek remuneration for, the use of their content by AI models
• Access: AI developers should be able to access and use large volumes of online content to train their models easily, lawfully and without infringing copyright, and
• Transparency: The copyright framework should be clear and make sense to its users, with greater transparency about works used to train AI models, and their outputs.
These three objectives then lead to what is clearly the preferred solution:
“A data mining exception which allows right holders to reserve their rights, underpinned by supporting measures on transparency”
Fine in principle, but the devil is always in the details, and the details in this case revolve around transparency (how detailed, what form, what about content already taken?) and, in particular, reservation of rights, aka “opting out”. This is easy to proclaim in principle but difficult to do in practice. British creators are up in arms, led by artists such as Paul McCartney and supported by the creative industries in the US. The British composer Ed Newton-Rex has penned a brilliant satire explaining how AI development in the UK will work if the current proposal is enacted. The problem with an opt-out solution is essentially twofold: it doesn’t deal with content already absorbed by AI developers, and it would be cumbersome if not impossible for many rightsholders to use.
Other governments have addressed the issue in different ways. Singapore has taken a very loose approach toward copyright protection, putting its thumb firmly on the scale in favour of AI developers. It is currently considering additional proposals that would strip even more protection from rights-holders, who are pushing back strongly. Japan had been widely and incorrectly reported to have been on the same path, resulting in a welcome clarification this year from the Agency for Cultural Affairs regarding the limits of Japan’s text and data mining (TDM) exception.
While AI dominated the copyright agenda in 2024, there were other issues relating to copyright and the copyright industries that I wrote about. The ongoing question of payment for news content by large digital platforms continued to play out in different ways. In Canada, the struggle between the government and US tech giants Google and META was finally “resolved” (after a fashion) at the end of last year. Google agreed to “voluntarily” pay $100 million annually into a fund for Canadian journalism in return for being exempted from the Online News Act (ONA), while META called the government’s bluff by blocking Canadian news providers from its platform, thus, in theory, avoiding being subject to the ONA. However, META has a very subjective interpretation of what constitutes Canadian news content, allowing some news providers to post to its platform, while many users have found workarounds, as documented by McGill’s Media Ecosystem Observatory. While the CRTC has investigated, the issue remains unresolved.
Meanwhile, in Australia, it seems that META intends to go down the same road of blocking news, announcing it will not renew the content deals it initially signed with Australian media in response to Australia’s News Media Bargaining Code, the model upon which Canada’s legislation was based. Unlike the Canadian government, the Australian government is planning a robust response. (More on this in a future blog post.) Finally, on the same topic, California (which had been threatening to introduce its own legislation to require digital platforms to compensate news content providers) emerged with an outcome very similar to that reached in Canada, with Google offering up some funding (although proportionally less than in Canada) while META appears to have walked away.
Controlled Digital Lending (CDL) was another copyright issue finally settled (in the US) in 2024. The Internet Archive lost its appeal of a lawsuit brought against it by a consortium of publishers, who argued that the Archive’s digital copying of their works constituted copyright infringement, notwithstanding the Archive’s theory that it was simply lending a digital version of a legally obtained physical work held by it (or by someone else associated with it). In December, the deadline for further appeals expired, effectively ending this saga. Whether Canadian university libraries, some of which are avid devotees of CDL, will take note remains to be seen.
The issue of circumventing a TPM (“Technological Protection Measure”), commonly referred to as a “digital lock” and often represented by a password allowing access to content behind a paywall, was also front and centre this year in Canada. In the case of Blacklock’s Reporter v Attorney General for Canada, the Federal Court found that an employee of Parks Canada, who shared a single subscription to Blacklock’s with a number of other employees by providing them with the password, did not infringe Blacklock’s copyright, since the employee did not circumvent the TPM (within the meaning of the law) and the purpose of the sharing was “research“, which is a specified fair dealing purpose. Blacklock’s is a digital research service that sells access to its content and protects that content with a paywall, as is common for many online content providers, such as magazines and newspapers.
Despite the hoo-ha of anti-copyright commentators asserting that the Court had found that “digital lock rules do not trump fair dealing“, it was equally clear the Court had ruled that fair dealing does not trump digital locks (TPMs). The Court did not undermine the protection afforded to businesses that protect their content through the use of TPMs. Rather, it determined that sharing a licitly obtained password did not constitute circumvention as defined in the Act, as I explained here (Fair Dealing, Passwords and Technological Protection Measures (TPMs) in Canada: Federal Court Confirms Fair Dealing Does Not Trump TPMs (Digital Lock Rules)). Although the Court did not legitimize circumvention of a TPM for fair dealing purposes, contrary to claims stating the opposite, its acceptance of password sharing is an outcome that legal experts have disagreed with (as do I, for what it is worth). The law is very clear that fair dealing cannot be used as a pretext or a defence against violation of the anti-circumvention provisions of the Copyright Act. The decision is now under appeal by Blacklock’s.
Finally, the last copyright point of note for 2024 is that this year marked the bicentenary of the introduction of the first copyright legislation in Canada, in the Assembly of Lower Canada, in 1824. It also marked the centenary of the entry into force of the first truly Canadian Copyright Act on January 1, 1924. This two hundred years of domestic copyright history is worth celebrating. The first legislation was introduced “for the Encouragement of Learning” so that more local school texts would be written and printed. Given the current standoff between the secondary and post-secondary educational establishment and Canadian authors and their copyright collective over license payments for the use of copyrighted works in teaching, one wonders whether we have really learned anything about the role copyright plays in our society. (Copyright and Education in Canada: Have We Learned Nothing in the Past Two Centuries? (From the “Encouragement of Learning” to the “Great Education Free Ride”).)
Leaving that question with you to ponder, gentle Reader, is probably a good way to end this look back over the past 12 months. Stay tuned for more commentary on copyright developments in 2025.
This article was originally published by Hugh Stephens Blog