April 22, 2024

Bars or Bytes? Exploring the Implications of a Track that Drake Might (or Might Not) Have Created

Did Drake respond to an alleged feud with fellow artists, including Kendrick Lamar and Rick Ross? As reported by the LA Times, a track titled "Push Ups (Drop and Give Me 50)" appeared online recently, taking aim at Lamar and several of his recent musical collaborators. The track went unclaimed on any of Drake's official platforms, however, leading some to question whether it had been fan-generated using artificial intelligence. That skepticism may be justified: Drake has reportedly had to denounce fan-generated songs before, and Lamar's rumoured response was actually the work of artificial intelligence and another rapper.

Over the last few days, Drake dropped "Push Ups" on streaming services, seemingly ending the debate over its source. But he raised a new set of interesting questions about the legal implications of AI-generated content when he later released "Taylor Made Freestyle," another Lamar diss that appears to feature AI-generated vocals from Tupac Shakur and Snoop Dogg.

Discussion

The above examples highlight the difficulty of distinguishing authentic content from content generated by artificial intelligence. In this brief comment, we explore some of the circumstances in which AI-generated content might infringe on the rights of personality, identity, privacy, and/or reputation.

Personality

Although the tort of misappropriation of personality is “well recognized” in Canada, it is less developed than its “right of publicity” analogue in the U.S. Generally speaking, however, it arises where a person’s personality has been appropriated for commercial purposes (i.e., conduct “amounting to an invasion of his right to exploit his personality by the use of his image, voice or otherwise with damage to the plaintiff”). Accordingly, so long as an individual has a valuable reputation, the use of that individual’s image (in the case of an AI-generated picture or video) or voice (in the case of an AI-generated song) can be problematic. A court is likely to look at the purpose of the portrayal to determine whether it falls within the ambit of this tort (e.g., a biography of which a celebrity is the subject would not expose the creator to liability, whereas using the celebrity to endorse or promote a product for commercial gain would). In the case of a diss track of uncertain origin that wholesale appropriates the voice of a chart-topping celebrity, the purpose of the portrayal is unlikely to provide safe harbour. That said, those who followed the social media spat between Drake and Rick Ross may agree that the old adage ‘any publicity is good publicity’ rings true, which raises the question of whether there is any damage at all!

Identity / Passing Off

In addition to claiming misappropriation of personality, someone – like Drake – who develops content as part of their business could also argue that AI-generated content purporting to be authentic misleads consumers. At the highest level, the tort of passing off and its codification in section 7(b) of the Trademarks Act exist to protect a person from the harm arising from unfair use of their identity (e.g., pretending that a product originates from that person) and to protect the public from being misled as to the source of particular goods or services. Much would depend on the nature of the AI-generated content in question and how it is presented; however, it is not outside the realm of possibility that such content could run afoul of passing off (e.g., an AI-generated song held out as being from a particular recording artist and competing with that recording artist).

Privacy

Many provinces have recognized statutory or common law invasion of privacy torts. While there is some debate about the scope of such torts, Ontario courts have recognized four distinct privacy torts that might readily apply to the misuse of AI-generated content. Intrusion upon seclusion imposes liability on a person who intentionally intrudes upon the seclusion, private affairs, or private concerns of another person, “where the invasion would be highly offensive to a reasonable person”. No proof of loss or harm is required (though any loss or harm would be compensable, if proven). As such, to the extent AI-generated content discloses sensitive personal details (e.g., “one's financial or health records, sexual practises and orientation, employment, diary or private correspondence”) or relies on those details to generate such content, liability may arise, as “it is enough if the fact of its publication is offensive” for the conduct to be actionable.

Unfortunately, AI-generated content is itself often used to invade an individual’s privacy and can attract liability on a number of other grounds. “Public disclosure of embarrassing private facts” is actionable in Canada, such that liability may arise where artificial intelligence is used to generate, post, and amplify such content across the internet. Similarly, “publicity which places the plaintiff in a false light in the public eye” is also likely actionable in Ontario. Whether on those grounds or through traditional claims of defamation, deepfake content – realistic-looking audio, video and/or images that have been altered or created using artificial intelligence – could attract liability where it is used to portray an individual in a negative light or as a tool to humiliate. Lastly, “appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness” is also actionable in Ontario, such that malicious use of an individual’s personality (as compared to the commercial use described above in the context of misappropriation of personality) may also attract liability for AI-generated content in an appropriate case.

Reputation

Canadian Courts recognize several causes of action to remedy falsehoods (e.g., defamation and injurious falsehood). In the commercial context, section 7(a) of the Trademarks Act prohibits certain false or misleading statements against competitors. Where registered trademarks are involved, section 22 of the Trademarks Act prohibits certain uses of well-known marks (or indicia linked thereto) in a manner that depreciates the goodwill attaching to them. As such, many of the examples canvassed above under Personality, Passing Off and Privacy – which by their nature constitute a falsehood – may also attract liability under reputation-related torts.

Takeaways

There is a growing need for those in the creative and tech industries to understand the legal implications of AI-generated content. The questions about AI-generated content raised by the ongoing rap feud between Drake and Lamar highlight broader challenges likely to come before our Courts – contending with AI-generated content that engages several aspects of the law at once, from personality to privacy. If AI sets the rhythm for tomorrow's tracks, the law must keep pace – without skipping a beat.

Update

OpenAI recently introduced a voice for its ChatGPT product that some people say sounds "eerily similar" to Scarlett Johansson's. This came after Johansson declined an offer from OpenAI to use her voice. We continue our discussion of AI-generated voices and explore the legal implications of this situation here.

This is Part 2 of our 5-Part Series on AI in the Courtroom.