AI and Copyright: What the Anthropic Decision Means for Musicians, Labels, and Publishers
In June 2025, a U.S. federal court issued a major decision involving Anthropic PBC, a leading developer of generative artificial intelligence (AI) systems. The case examined whether training AI models on copyrighted materials without permission can qualify as “fair use” under U.S. law.
Although the ruling applies to American copyright law, it raises important questions for Canada’s music industry. For musicians, labels, and publishers, the decision signals how courts may approach AI training on songs, lyrics, and recordings.
What Happened
A group of authors sued Anthropic, claiming the company used their books to train its AI model without authorization. The court distinguished between materials that were lawfully obtained and those that were unlawfully sourced.
It found that training an AI model on books that were legally purchased or licensed could, in some circumstances, qualify as “fair use” under U.S. copyright law. The reasoning was that the model used the works to learn language patterns rather than to reproduce them directly.
By contrast, the court indicated that using materials acquired through unauthorized or illegal sources would not fall within that protection. Those issues remain before the court and could result in further findings of infringement.
For those in the music industry, this case indicates that the source of training data is an important factor in determining whether the use of copyrighted material to train AI models is legally defensible.
Why the Anthropic Decision Matters for the Music Industry
The decision does not directly involve music, but its reasoning will likely influence future AI and copyright disputes affecting creators, publishers, and rights-holders.
1. Creative content may be used to train AI models without consent in some jurisdictions and not others.
The U.S. court’s reasoning suggests that AI developers may, in certain circumstances, train models on lawfully obtained creative works without first seeking permission. Under U.S. “fair use,” that training can be considered transformative and therefore permitted.
Conversely, in Canada, whether using copyrighted works to train AI models constitutes infringement remains an open legal question. However, the Copyright Act limits “fair dealing” to specific purposes such as research, criticism, or parody, and training an AI model on music or lyrics does not clearly fit within those categories. Unless training materials are lawfully licensed, any such use likely constitutes substantial reproduction requiring authorization from rights-holders.
2. Training and output raise separate copyright risks.
The Anthropic decision focused on the legality of using copyrighted works for AI training but did not address whether AI outputs could infringe copyright. This raises distinct questions: whether AI-generated content can itself be protected by copyright, and whether such outputs might infringe existing works. In both the United States and Canada, if an AI model generates lyrics, melodies, or recordings that closely resemble existing works, rights-holders could argue that substantial reproduction has occurred. Notably, the “access” requirement for infringement should be straightforward to establish where the allegedly infringed work can be shown to have been part of the AI’s training dataset.
3. Licensing and data sourcing will become central business issues.
The Anthropic decision reinforces the growing importance of lawful sourcing in AI training. For the music industry, this development signals that questions around licensing and consent will continue to evolve. Rights-holders may wish to consider how their catalogues are being used and whether their existing agreements address AI-related activity. Similarly, AI developers should be mindful that the absence of clear licensing frameworks may present legal and reputational risk as the law continues to develop.
Practical Steps for Musicians, Labels, and Publishers
1. Audit your catalogue.
Rights-holders should identify whether their songs, lyrics, or recordings may appear in datasets used for AI training. New tools are emerging that can help trace creative works within training materials and monitor unauthorized use.
2. Review your contracts.
Examine songwriter, producer, and recording agreements to determine whether they address AI training or derivative use. Many existing agreements are silent on this issue. Consider whether future contracts should include express language clarifying if and how AI training is permitted.
3. Establish a clear AI policy.
Develop a clear position on how your works can be used in connection with AI. Some rights-holders prefer to prohibit all AI training without explicit authorization, while others are open to structured licensing or collaboration. Having an internal or published policy helps ensure consistency when new requests or opportunities arise.
4. Monitor Canadian legal developments.
The federal government has begun consulting on how generative AI interacts with copyright law. Potential legislative changes may determine whether AI training can be considered fair dealing or will require explicit licensing. Staying informed will help rights-holders adapt their approach as the legal landscape evolves.
The Bottom Line
The Anthropic decision is the first major ruling to consider how copyright law applies to AI model training. While the U.S. court found that some uses could qualify as fair use, Canadian law remains stricter. For Canadian musicians, publishers, and labels, this means there is still significant legal uncertainty, as well as an opportunity to shape how AI interacts with music rights.
Discussions on these issues are already underway in Canada. Industry groups, including the Canadian Music Publishers Association, have participated in consultations and lobbying efforts in Ottawa to ensure that any future legislative changes reflect the interests of creators and rights-holders.
The key takeaway is to be proactive. Understand how your catalogue may be used, make your expectations clear in contracts, and stay informed about upcoming changes to Canadian copyright law. The relationship between AI and music is evolving quickly, and those who act early will be best positioned to protect their creative work.
For advice on protecting your catalogue, negotiating AI-related licensing terms, or understanding how AI and copyright law affect your business, contact John Graham or Matthew Gorman of Cox & Palmer’s Entertainment Law Group.