The Right of Publicity Is Also Affected by AI

Attorney 蘇思鴻
Published: 2024/09/10 18:58

Every word of my articles is written by me, and the ideas are my own.
The development of artificial intelligence touches every sphere, and the right of publicity is no exception. How is it affected? If AI is used to generate a face strikingly similar to that of a performer or celebrity, and that person's fame is then exploited for profit, the celebrity's right of publicity is arguably infringed; likewise, using AI to generate a celebrity's voice for commercial purposes risks infringing the same right. Legal disputes of this kind are bound to spring up like mushrooms after rain.
Some time ago I came across videos online in which AI made Trump and Biden appear to sing Chinese songs (https://www.youtube.com/watch?v=pkY_ZaNOYoc). Plainly, AI can now generate footage of public figures doing things they never actually did, such as grafting an actress's face onto pornographic footage convincingly enough to fool viewers. Such conduct infringes not only the performer's right to her own image but also her right of publicity, and the problem is growing more serious.
AI-generated deepfake images are everywhere, and how to regulate them under existing law or through new legislation is a matter the legal community takes seriously. No one bothers to synthesize the image of an unknown person, because there is nothing to gain; people synthesize the images of celebrities, because only celebrities have the fame to draw public attention, and attention brings profit. How, then, should these deepfakes be regulated? What remedies should be available? How should damages be calculated when a claim is brought? These questions are being debated intensely in the United States. Because, unlike the United States, we have no law protecting the right of publicity, the discussion here has been far less heated, but how we should respond is worth careful study.

Some of you may recall that Authors Alliance published our long-awaited guide, Writing About Real People, earlier this year. One of the major topics in the guide is the right of publicity—a right to control use of one’s own identity, particularly in the context of commercial advertising. These issues have been in the news a lot lately as generative AI poses new questions about the scope and application of the right of publicity. 

Sound-alikes and the Right of Publicity
One important right of publicity question in the genAI era concerns the increasing prevalence of “sound-alikes” created using generative AI systems. The issue of AI-generated voices that mimicked real people came to the public’s attention with the apparently convincing “Heart on My Sleeve” song, imitating Drake and the Weeknd, and tools that facilitate creating songs imitating popular singers have increased in number and availability.

AI-generated soundalikes are a particularly interesting use of this technology when it comes to the right of publicity because one of the seminal right of publicity cases, taught in law schools and mentioned in primers on the topic, concerns a sound-alike from the analog world. In 1986, the Ford Motor Company hired an advertising agency to create a TV commercial. The agency obtained permission to use “Do You Wanna Dance,” a song Bette Midler had famously covered, in its commercial. But when the ad agency approached Midler about actually singing the song for the commercial, she refused. The agency then hired a former backup singer of Midler’s to record the song, apparently asking the singer to imitate Midler’s voice in the recording. A federal court found that this violated Midler’s right of publicity under California law, even though her voice was not actually used. Extending this holding to AI-generated voices seems logical and straightforward—it is not about the precise technology used to create or record the voice, but about the end result the technology is used to achieve. 

Right of Publicity Legislation

The right of publicity is a matter of state law. In some states, like California and New York, the right of publicity is established via statute, and in others, it’s a matter of common law (or judge-made law). In recent months, state legislatures have proposed new laws that would codify or expand the right of publicity. Similarly, many have called for the establishment of a federal right of publicity, specifically in the context of harms caused by the rise of generative AI. One driving force behind calls for the establishment of a federal right of publicity is the patchwork nature of state right of publicity laws: in some states, the right of publicity extends only to someone’s name, image, likeness, voice, and signature, but in others, it’s much broader. While AI-generated content and the ways in which it is being used certainly pose new challenges for courts considering right of publicity violations, we are skeptical that new legislation is the best solution. 

In late January, the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024 (or “No AI FRAUD Act”) was introduced in the House of Representatives. The No AI FRAUD Act would create a property-like right in one’s voice and likeness, which is transferable to other parties. It targets voice “cloning services” and mentions the “Heart on My Sleeve” controversy specifically. But civil society groups and advocates for free expression have raised alarm about the ways in which the bill would make it easier for creators to actually lose control over their own personality rights while also impinging on others’ First Amendment rights due to its overbreadth and the property-like nature of the right it creates. While the No AI FRAUD Act contains language stating that the First Amendment is a defense to liability, it’s unclear how effective this would be in practice (and as we explain in the Writing About Real People Guide, the First Amendment is always a limitation on laws affecting freedom of expression). 

The Right of Publicity and AI-Generated Content

In the past, the right of publicity has been described as “name, image, and likeness” rights. What is interesting about AI-generated content and the right of publicity is that a person’s likeness can be used in a more complete way than ever before. In some cases, both their appearance and voice are imitated, associated with their name, and combined in a way that makes the imitation more convincing. 

What is different about this iteration of right of publicity questions is the actors behind the production of the soundalikes and imitations, and, to a lesser extent, the harms that might flow from these uses. A recent use of a different celebrity’s likeness in connection with an advertisement is instructive on this point. Earlier this year, advertisements emerged on various platforms featuring an AI-generated Taylor Swift participating in a Le Creuset cookware giveaway. These ads contained two separate layers of deceptiveness: most obviously, that Swift was AI-generated and did not personally appear in the ad, but more bafflingly, that they were not Le Creuset ads at all. The ads were part of a scam whereby users might pay for cookware they would never receive, or enter credit card details which could then be stolen or otherwise used for improper purposes. Compared to more traditional conceptions of advertising, the unfair advantages and harms caused by the use of Swift’s voice and likeness are much more difficult to trace. Taylor Swift’s likeness and voice were appropriated by scammers to trick the public into thinking they were interacting with Le Creuset advertising. 

It may be that the right of publicity as we know it (and as we discuss it in the Writing About Real People Guide) is not well-equipped to deal with these kinds of situations. But it seems to us that codifying the right of publicity in federal law is not the best approach. Just as Bette Midler had a viable claim under California’s right of publicity statute back in 1992, Taylor Swift would likely have a viable claim against Le Creuset if her likeness had been used by that company in connection with commercial advertising. The problem is not the “patchwork of state laws,” but that this kind of doubly-deceptive advertising is not commercial advertising at all. On a practical level, it’s unclear what party could even be sued over this kind of use. Certainly not Le Creuset. And it seems to us unfair to say that the creator of the AI technology used should be left holding the bag, just because someone used it for fraudulent purposes. The real fraudsters—anonymous but likely not impossible to track down—are the ones who can and should be pursued under existing fraud laws. 

Authors Alliance has said elsewhere that reforms to copyright law cannot be the solution to any and all harms caused by generative AI. The same goes for the intellectual property-like right of publicity. Sensible regulation of platforms, stronger consumer protection laws, and better means of detecting and exposing AI-generated content are possible solutions to the problems that the use of AI-generated celebrity likenesses has brought about. To instead expand intellectual property rights under a federal right of publicity statute risks infringing on our First Amendment freedoms of speech and expression.

Attorney 蘇思鴻

  • Phone: 0920235793
  • Years in practice: 5+
  • 蘇律師事務所 (Attorney Su's Law Office)
  • Online consulting