This article was initially published in Reuters and Thomson Reuters Westlaw Today.
Lily Li of Metaverse Law discusses the landscape for AI legislation following the passage of the European Union’s AI Act, as US states pass AI bills with differing thresholds, coverage and subject matter.
The landscape for EU and US AI legislation feels like a rinse and repeat of data privacy legislation in 2020. Back then, the General Data Protection Regulation (GDPR) was in full force and effect, while California and other states were developing privacy laws at breakneck speed. Many companies were caught unaware by GDPR, only to face a new onslaught of US state-by-state privacy laws.
Now, companies face the same problem. The EU has just passed a comprehensive AI law, the EU AI Act, which imposes significant compliance obligations and antitrust-style mega fines.
In the United States, state legislatures are passing AI bills of their own, with differing thresholds, coverage and subject matter. Should global companies bite the bullet and comply with the EU AI Act globally, or take a more nuanced jurisdiction-by-jurisdiction approach?
Comprehensive and imposing
The EU AI Act is a comprehensive law that EU regulators developed over several years. One of its unique features, not seen in US legislation, is a complete ban on certain “prohibited AI practices” (Article 5, https://bit.ly/4gQHfe8). These prohibited practices include assessing whether an individual is likely to commit a crime, real-time biometric identification by law enforcement (think Minority Report), and social scoring of individuals.
In addition to setting forth prohibited practices, the EU AI Act designates a list of high-risk AI practices. This includes, but is not limited to, use of AI in employment decisions, credit scores, insurance and access to services. For these high-risk AI practices, AI providers need to implement a full risk management program that considers the following factors:
- Data governance
- Technical documentation
- Recordkeeping
- Human oversight
- Accuracy, robustness, and cybersecurity management
- Quality management
Like the GDPR, the EU AI Act imposes significant fines: up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices, and up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher, for other violations (Article 99, https://bit.ly/3XRewgl). For example, a company with €1 billion in worldwide annual turnover could face a fine of up to €70,000,000 for a prohibited practice, since 7% of its turnover exceeds the €35,000,000 floor. The law requires each EU country to designate at least one independent and impartial body to monitor and enforce the EU AI Act’s requirements.
In contrast, the US is following a patchwork approach. Instead of comprehensive federal legislation, we are seeing a state-by-state and agency-by-agency approach. To date, these laws generally fall into four main categories: (i) consumer protection; (ii) employment rights; (iii) image and likeness rights; and (iv) transparency/risk assessment requirements for high-risk AI processing.
Consumer protection
For state consumer protection laws governing AI, Utah is one of the first movers. In May 2024, it added requirements governing AI to its consumer protection statutes. Utah’s AI Policy Act requires businesses to disclose their use of generative AI tools and makes them liable for any consumer protection violations committed by those tools.
At the federal level, the Federal Trade Commission (FTC) has used its consumer protection authority under Section 5 of the FTC Act to police unfair and deceptive practices in commerce concerning AI. In 2022, Weight Watchers agreed to pay a $1.5 million civil penalty in a settlement with the FTC, in part over allegations that the company improperly collected children’s data to train its models and algorithms. The settlement also included “algorithmic disgorgement”: Weight Watchers was required to delete any models trained on that data.
More recently, on Sept. 25, 2024, the FTC cracked down on companies that make misleading or fraudulent claims about their use of AI tools. This included taking action against DoNotPay (https://bit.ly/3BtSWXW), a company that claimed to offer an AI service that was “the world’s first robot lawyer.”
DoNotPay agreed to a $193,000 settlement with the FTC, pursuant to a consent order. The consent order (https://bit.ly/4dNyjmN) also requires DoNotPay to refrain from “representing that its Service or any other internet-enabled product or service that it offers operates like a human lawyer or any other type of professional, unless that representation is not misleading and DoNotPay possesses competent and reliable evidence to substantiate the representation.” In addition, DoNotPay is required to notify consumers of the order and to submit compliance reports to the FTC.
AI in employment decision-making
At the employment level, Illinois recently enacted a law that prohibits employers from using AI systems to discriminate against employees or job applicants based on any protected class.
In addition, the law explicitly bans the use of race, or of zip code as a proxy for race, in AI systems making employment decisions. Illinois’ requirements join New York City Local Law 144 (https://on.nyc.gov/3zHlSva) in regulating automated employment decision-making tools. While Local Law 144 does not include an explicit ban on the use of race or zip code in AI systems, it imposes stringent notice and audit requirements.
Where employers use AI systems “to substantially assist or replace discretionary decision making,” Local Law 144 requires publicly available third-party bias audits of automated employment decision-making tools.
Image and likeness rights
Generative AI is also regulated by state laws and cases governing image and likeness rights. Following the actors’ and writers’ strikes in Hollywood, and high-profile litigation by Sarah Silverman and others, California has acted. In the last week, Governor Gavin Newsom signed two AI bills designed to protect entertainers.
AB 2602 requires contracts with actors and other performers to specify whether generative AI will be used to create a replica of the performer’s voice or likeness. AB 1836 bans the use of digital replicas of deceased performers without the consent of the performer’s estate.
Transparency and risk assessment
The majority of US state comprehensive data privacy laws require transparency concerning the use of AI to process personal data and make decisions that impact important rights, such as employment, housing, and access to services. In addition, these laws generally give consumers the right to opt out of such processing.
Colorado’s AI Act, slated to go into effect in 2026, goes even further. It imposes risk assessment and bias assessment requirements for any “high-risk artificial intelligence system” that makes, or is a substantial factor in making, a consequential decision.
For purposes of the law, “consequential decision” means a decision that has a material or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:
- Education
- Employment
- Financial or lending services
- Essential government services
- Health-care services
- Housing
- Insurance
- Legal services
The Colorado AI Act imposes even more substantial transparency and notification obligations. As just one example, developers and deployers of “high-risk” AI systems are required to publicly post on their websites a description of those high-risk systems, as well as describe how each AI system manages the risks of bias. This includes further reporting to the Attorney General of “any known or reasonably foreseeable risks of AI discrimination arising from the intended use of the system.” Colo. Rev. Stat. § 6-1-1702(5).
Where to go from here?
The trend lines are clear: AI legislation is here to stay. While the US has not enacted federal AI legislation of the same scope as the EU AI Act, we already see significant risk assessment and transparency requirements. As a result, AI companies need to go global with their AI risk management strategies or risk getting left behind.
Lily Li is the founder and president of Metaverse Law. She advises global clients on their AI risk assessments and data protection impact assessments, and supports her clients’ overall governance, risk, and compliance (GRC) programs. In addition, she holds the GIAC Certified Forensic Analyst (GCFA) certification for advanced incident response and digital forensics, as well as information privacy certifications including the FIP and CIPP/US/E/M. She is based in Newport Beach, California, and can be reached at info@metaverselaw.com.