
Metaverse Law featured in OC Lawyer Magazine

The Orange County Bar Association recently released the January 2024 issue of Orange County Lawyer magazine. This month, Orange County Lawyer includes an article written by Metaverse Law’s Lily Li.

Read “AI Generated Deepfakes: Potential Liability and Remedies” below or in Orange County Lawyer magazine.


[Originally published as a Feature Article: AI-Generated Deepfakes: Potential Liability and Remedies, by Lily Li, in Orange County Lawyer Magazine, January 2024, Vol. 66 No. 1, page 26.]

AI-Generated Deepfakes: Potential Liability and Remedies


by Lily Li


Nearly ten years ago, in Netflix’s hit series House of Cards, the Underwoods’ presidential bid was almost derailed by a leaked picture of an affair, nude shower scene and all. While the picture was real, the Underwoods were able to undermine its credibility by claiming it was fake, going so far as to recreate the image using a hired model to show how “easy” it was to fabricate photos.

This episode, aptly named “The Road to Power,” highlights one of the greatest risks of disinformation and fake or synthetic media: not the public’s gullibility to doctored images, but the watering down of trust in online media, which leads individuals to rely solely on friends, family, and other sources of information that echo their own beliefs and values.

Fast forward a decade, and synthetic media, also known as “deepfakes,” is now pervasive. In early 2022, for example, a fake video of Ukrainian President Volodymyr Zelensky circulated on social media, calling for his soldiers to lay down their arms and surrender to Russia.[1] At the corporate level, deepfakes have been used to mimic a CEO’s voice to fraudulently transfer $243,000.[2] Just as troubling, and even creepier, a “sophisticated hacking team” impersonated the CEO of cryptocurrency company Binance by using “video footage of his past TV appearances and digitally alter[ing] it to make an ‘AI hologram’ of him and trick people into meetings.”[3] At home, scammers can use deepfaked voices to mimic loved ones, or AI-powered chatbots to run romance scams via text messages and phone calls, all as a front to persuade victims to wire money, send gift cards, or reveal personal information for identity theft. The problem has become so severe that both the FTC and the FCC released consumer alerts in early 2023 regarding these AI-generated scams.[4]

The ease with which generative AI can create realistic video, voice, and text will only aggravate these concerns. Deepfakes have long relied on machine learning to iterate and become more realistic with training, but in the past, this type of technology required significant computing resources and time. Now, almost every tech product incorporates generative AI or machine learning in some form, putting these capabilities within reach of every novice programmer or script kiddie.

Given these growing risks, this article will focus on the potential liability that creators, platforms, and publishers face in creating and spreading deepfakes, as well as the challenges of pursuing remedies under existing laws. In addition, this article will discuss pending rulemaking governing deepfakes and potential steps forward.


Privacy Liability for Deepfakes

Biometrics: If deepfakes rely on scans of faceprints, facial geometry, or voiceprints to make the false video or audio, or even to train their algorithms, then biometric privacy laws may apply. The Illinois Biometric Information Privacy Act (BIPA) is one of the strictest data privacy laws in the country. It requires express written consent and meaningful disclosures prior to any use and disclosure of Illinois residents’ biometric data, and it interprets biometric data broadly to include faceprints and voiceprints. BIPA provides a private right of action with up to $5,000 in statutory damages per violation, and it does not require a showing of harm.[5] In early 2023, in Cothron v. White Castle Systems, Inc.,[6] the Illinois Supreme Court went even further, confirming that a separate claim accrues each time biometric data is scanned or transmitted in violation of BIPA, adding further teeth to this law.

Revenge Porn Laws: To the extent deepfakes include pornographic images, several states, like Virginia,[7] have explicitly brought deepfakes within their “revenge porn” laws, while victims in other states have pursued claims under existing revenge porn laws on the theory that deepfakes amount to non-consensual pornography. The legal consequences vary by jurisdiction, ranging from misdemeanors to felonies with fines and jail time. New York and California also provide a private right of action for deepfake pornography.

General Data Protection Regulation (GDPR): The EU has a broad privacy law that governs the use of personal data. Unlike U.S. state privacy laws, which generally allow free use of publicly available data (except for biometric processing), the GDPR requires all individuals, companies, and non-profits to have a lawful basis for processing any personal data, with limited exclusions for personal data “manifestly made public by the data subject.” Thus, indiscriminate scraping of social media data for deepfakes, especially where users have limited the audience for their data, would likely violate the GDPR and expose the scraper to fines and regulatory scrutiny.


IP, Torts, and Other Remedies

Defamation: Traditional defamation claims also apply to deepfakes if the plaintiff can show that the deepfake was communicated to third parties and makes false assertions that harm the plaintiff’s reputation. Public figures must also show actual malice.

Rights of Publicity: Many states recognize a “right of publicity” in an individual’s voice or image. The damages or royalties from a right of publicity claim are proportionate to the value of licensing one’s image, so these types of claims are most appropriate for celebrities who ordinarily profit from licensing their images.

Copyright and Trademark: To the extent deepfakes use existing logos, photos, music, or even unique website designs to seem official or legitimate, this may support claims of copyright and trademark infringement. Copyright holders may also send takedown notices under the Digital Millennium Copyright Act (DMCA) for infringing content.

Breach of Contract: If deepfakes rely on content scraped from existing sites or platforms, this may also support a breach of contract claim against the offending party (to the extent it has signed up for and agreed to the platform’s rules). For example, in the widely publicized case hiQ Labs, Inc. v. LinkedIn Corp., the district court found that hiQ breached LinkedIn’s User Agreement both through its own scraping of LinkedIn’s site and through its use of independent contractors who logged into LinkedIn to do quality control of the data.[8] The court noted, however, that LinkedIn faced waiver and estoppel defenses on certain claims because of how much time had elapsed since its initial awareness of the data scraping. Consequently, platforms that wish to rely on breach of contract claims to combat data scrapers, and potential misuse of their platforms for generative AI and deepfakes, must act swiftly and definitively. This is likely the impetus behind X Corp.’s (formerly Twitter) recent crackdown on data scrapers through a series of lawsuits filed in August 2023.[9]

State Deepfake Laws: California, Texas, and Virginia have also enacted laws specific to political deepfakes, but these laws are limited in application and remedy. Texas SB 751, for instance, prohibits deepfake videos created “with intent to injure a candidate or influence the result of an election” and “published and distributed within thirty days of an election,” making violations a Class A misdemeanor punishable by up to a year in jail and fines up to $4,000. More recently, Washington State passed Senate Bill 5152, which requires clear and transparent notices on any synthetic video or audio concerning a candidate for election and gives candidates a private right of action, including attorney’s fees for the prevailing party.


Limitations of Existing Remedies; Section 230 of the Communications Decency Act

There are several hurdles that would-be plaintiffs face in pursuing deepfake claims. For many torts, like defamation and right of publicity, the recoverable damages may be small compared to the cost of litigation, and important First Amendment protections cover non-commercial speech such as satire and political commentary. In addition, deepfake content easily crosses borders, so it may be difficult to find a defendant to penalize or enjoin. Consequently, instead of pursuing traditional claims, many victims rely solely on IP takedown notices or a social media platform’s own processes for flagging and removing deepfake content.

At present, Section 230 of the Communications Decency Act also shields platforms from liability for the content users upload and distribute on their platforms, as platforms generally do not constitute the “speaker” or “publisher” of such content. The line between acting as a pure platform and contributing to or generating harmful content is increasingly blurred, however. In the recent Supreme Court case Twitter, Inc. v. Taamneh,[10] plaintiffs alleged that social media platforms profited from ISIS recruitment videos and allowed ISIS to take advantage of the platforms’ “recommendation” algorithms that match users with content. While the Supreme Court declined to address the scope of Section 230 protections for these types of “recommendation” algorithms, it suggested that Section 230 may not protect platforms that create text, audio, or video through generative AI. At oral argument in Gonzalez v. Google, a companion case to Taamneh, Justice Gorsuch strongly implied that generative AI would fall outside Section 230’s protections, stating: “I mean, artificial intelligence generates poetry, it generates polemics today. That—that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected. Let’s—let’s assume that’s right, okay?”[11]

Going forward, we anticipate that the Illinois Biometric Information Privacy Act, and pending bills on biometric data, will likely be a more promising and lucrative way to attack platforms that explicitly use biometric data to generate or share deepfakes. In addition, as noted above, plaintiffs may have more luck pursuing claims against platforms that help create deepfake content using generative AI than against platforms that merely host user content.


Do We Need Additional Laws?

As we can see from this patchwork of common law and statutory rights, the potential sources of liability for creating and publishing deepfakes are many, but the best avenue for plaintiffs to pursue a remedy is unclear. Even some regulators are scratching their heads as to whether existing rules apply to deepfakes. For example, in July 2023, Public Citizen filed a petition with the Federal Election Commission (FEC), asking the FEC to amend its regulation on “fraudulent misrepresentation” at 11 C.F.R. § 110.16[12] to clarify that “the restrictions and penalties of the law and the Code of Regulations are applicable” should “candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false [AI]-generated content in campaign ads or other communications.”[13] In response, the FEC published a notice soliciting public comment on the issue before deciding the petition on the merits.

The FTC has taken a firmer stance, stating that it does have authority to regulate AI generally and deepfakes more specifically. In a March 2023 blog post titled “Chatbots, deepfakes, and voice clones: AI deception for sale,” the FTC noted that the “FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive—even if that’s not its intended or sole purpose.”[14]

Abroad, the European Union is taking an entirely different approach, developing a comprehensive law (the EU “AI Act”) that would govern artificial intelligence as a whole. The law, as drafted, requires all high-risk AI systems to undergo risk assessments for bias, safety, accuracy, and other risks. In addition, the AI Act would impose transparency obligations for deepfakes, defined as “AI systems that generate or manipulate image, audio or video content.”[15] While the AI Act is still in draft form, it is likely to have as large and sweeping an impact as the General Data Protection Regulation once it goes into effect.

Given the existing plethora of rights and remedies under the law, and the potential impact of the EU AI Act, this author does not believe that this is the right time to pursue a federal law specific to deepfakes, even though they present serious threats. In the current divisive political climate, any proposed law will likely be blocked, watered down, or, if passed, fail to strike the right balance between free speech and misleading content. Instead, courts and regulators should strictly enforce existing laws that protect individual privacy and image rights, as well as the right to be free from false and deceptive practices. Attorneys should advise their tech clients on the risks of generative AI technologies and the potential gaps in Section 230 coverage. Finally, as private citizens, let’s remain vigilant about what we read and share, and not be afraid to call out anyone who seeks to deceive.


ENDNOTES

(1) Bobby Allyn, Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn, NPR (Mar. 16, 2022, 8:26 PM), https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.

(2) Catherine Stupp, Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case, Wall Street Journal (Aug. 30, 2019, 12:52 PM), https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.

(3) Luke Hurst, Binance executive says scammers created deepfake ‘hologram’ of him to trick crypto developers, Euronews (Aug. 24, 2022, 2:47 PM), https://www.euronews.com/next/2022/08/24/binance-executive-says-scammers-created-deepfake-hologram-of-him-to-trick-crypto-developer.

(4) Alvaro Puig, Scammers use AI to enhance their family emergency schemes, Federal Trade Commission (Mar. 20, 2023), https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-use-ai-enhance-their-family-emergency-schemes; ‘Grandparent’ Scams Get More Sophisticated, Federal Communications Commission, https://www.fcc.gov/grandparent-scams-get-more-sophisticated (last visited Nov. 29, 2023).

(5) See Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186 (Jan. 25, 2019).

(6) 2023 IL 128004 (Feb. 17, 2023).

(7) Va. Code Ann. § 18.2-386.2.

(8) No. 17-3301 (N.D. Cal. Nov. 4, 2022).

(9) Blair Robinson, X Corp Lawsuits Target Data Scraping, National Law Review (Aug. 17, 2023), https://www.natlawreview.com/article/x-corp-lawsuits-target-data-scraping.

(10) 598 U.S. 471 (May 18, 2023).

(11) Transcript of Oral Argument at 49, Gonzalez v. Google, 598 U.S. 617 (2023) (No. 21-1333).

(12) Available at https://www.ecfr.gov/current/title-11/section-110.16.

(13) Artificial Intelligence in Campaign Ads, 88 Fed. Reg. 55606 (proposed Aug. 16, 2023), https://www.federalregister.gov/documents/2023/08/16/2023-17547/artificial-intelligence-in-campaign-ads.

(14) Michael Atleson, Chatbots, deepfakes, and voice clones: AI deception for sale, Federal Trade Commission (Mar. 20, 2023), https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.

(15) Tambiama Madiega, Artificial intelligence act, EU Legislation in Progress, European Parliament (June 2023), https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf.


Lily Li is a data privacy, AI, and cybersecurity lawyer and founder of Metaverse Law. She is a certified information privacy professional for the United States and Europe and is a GIAC Certified Forensic Analyst for advanced incident response and computer forensics.