As AI continues to advance, so do regulatory efforts. During the 2024 legislative session, 45 states, along with Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI bills. With the 2025 legislative session wrapping up, we are seeing similar trends this year.
As new legal requirements emerge, organizations across the U.S. and EU may face overlapping – yet not identical – regulations that touch on issues of bias, safety, privacy, and transparency. Additionally, these laws may categorize the same AI system differently in different jurisdictions, requiring a nuanced approach to navigating these laws.
Keeping this in mind, this article provides a brief overview of a handful of these laws. The practical takeaway? Businesses operating in the U.S. or EU should be aware of their legal requirements. Additionally, these organizations may want to consider a programmatic, auditable, and documented approach to AI governance, which may allow the business to map their AI controls to multiple legal frameworks.
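For readers who want a concrete picture of what a “programmatic, auditable” approach can look like, below is a minimal sketch in Python, assuming hypothetical control names and framework labels of our own invention; it simply records which internal controls map to which laws so that a single documented control can serve as evidence under several frameworks at once.

```python
# Hypothetical sketch: a simple registry mapping internal AI-governance controls
# to the legal frameworks each control helps satisfy, so one documented control
# can be reused as evidence across multiple laws.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Control:
    control_id: str                      # internal identifier (illustrative)
    description: str                     # what the control does
    frameworks: List[str] = field(default_factory=list)  # laws it maps to


controls = [
    Control("GOV-01", "Risk-based classification of each AI use case",
            ["EU AI Act", "Colorado AI Act", "TRAIGA"]),
    Control("GOV-02", "Pre-use notice and opt-out for consequential decisions",
            ["Colorado AI Act", "CCPA ADMT Regulations"]),
    Control("GOV-03", "Documented risk/impact assessments, refreshed on material change",
            ["Colorado AI Act", "CCPA Risk Assessment Regulations", "NIST AI RMF"]),
]


def controls_for(framework: str) -> List[str]:
    """Return the IDs of controls that support a given legal framework."""
    return [c.control_id for c in controls if framework in c.frameworks]


print(controls_for("Colorado AI Act"))  # ['GOV-01', 'GOV-02', 'GOV-03']
```

Keeping the mapping in one place makes it easier to show, control by control, which legal obligations a given governance measure is intended to address.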
Converging Themes
While details of AI laws differ across jurisdictions, trends seem to be converging on risk-based classification, transparency requirements, and enforcement efforts.
Regulators are moving toward risk-based classification, meaning AI systems are categorized according to their use case and the risk that use case presents. As seen in the EU AI Act, the Colorado AI Act, and TRAIGA, systems may be prohibited outright or classified by risk, with high-risk systems facing stricter governance, testing, and documentation requirements.
Another shared theme is transparency. Laws including the EU AI Act, the Colorado AI Act, and the Utah AI Policy Act may require covered entities to tell people when AI is in use, while other laws may require the developer or deployer to explain the logic behind certain outputs, provide consumers with a method of contesting certain decisions, or allow consumers to opt out of certain types of decisionmaking entirely. The California AI Transparency Act and the EU AI Act may also require labeling of certain AI-generated content.
Finally, enforcement is sharpening. The EU AI Act comes with regulatory teeth: violations of its prohibited-practices rules carry fines of up to €35,000,000 or 7% of global annual turnover, whichever is higher. In the U.S., state attorneys general and regulators have been active in monitoring AI missteps, including consumer protection and privacy violations. For example, attorneys general in Massachusetts and Oregon have issued advisories on how consumer protection laws apply to AI, while Texas Attorney General Ken Paxton reached a first-of-its-kind settlement in a healthcare generative AI investigation.
The European Union Artificial Intelligence Act (EU AI Act)
Overview: The EU AI Act is the world’s first comprehensive AI regulation and sets a high-water mark for governance expectations. The Act is technology neutral and uses risk-based classification to sort AI systems into risk-tiers, each with escalating obligations.
Key Provisions:
- Prohibited systems include cognitive behavioral manipulation, most real-time biometric identification, and systems used for social scoring. These systems are considered to pose an unacceptable risk to safety or fundamental rights.
- High-risk systems include hiring tools, biometric identification, and critical safety technology. They must undergo conformity assessments, maintain technical documentation, and ensure human oversight.
- Limited-risk systems include chatbots, deepfake generators, and public-facing generative AI. These systems have transparency obligations to ensure users understand they are interacting with AI.
- Minimal-risk systems include AI-enabled spam filters, grammar checkers, and basic AI in video games. These systems have no specific obligations under the Act, but best practices are encouraged.
Key Dates & Enforcement:
- February 2, 2025: Prohibitions on certain AI systems and requirements on AI literacy start to apply.
- August 2, 2025: Rules on general-purpose AI models, governance, confidentiality, and penalties start to apply.
- August 2, 2026: The remainder of the AI Act (except for Article 6(1)) applies.
The Act will be enforced by the European AI Office and national market surveillance authorities. Non-compliance with the prohibited AI practices is subject to an administrative fine of up to €35,000,000 or up to 7% of worldwide annual turnover, whichever is higher. Non-compliance with other provisions is subject to administrative fines of up to €15,000,000 or up to 3% of total worldwide annual turnover, whichever is higher.
Colorado: Consumer Protections for Artificial Intelligence Act (CO AI Act)
Overview: Enacted in May 2024, the CO AI Act was the first far-reaching AI law in the United States. This Act primarily focuses on high-risk AI systems, including but not limited to those that influence “consequential decisions” – decisions impacting areas such as employment, education, housing, healthcare, finance, insurance, legal services, and essential government services.
Key Provisions:
Developers and deployers must both exercise “reasonable care” to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. For both, this may include providing notice to the Colorado Attorney General within 90 days of becoming aware of new discrimination risks.
- Developers. There is a rebuttable presumption that the developer used reasonable care if they disclose, among other things:
- reasonably foreseeable uses and known inappropriate or harmful uses of the AI system (including risks of algorithmic discrimination) and the measures taken to mitigate them;
- the intended purpose, benefits, uses, and outputs of the AI system; and
- high-level summaries of the data types used to train the AI system, including data governance measures.
- Deployers must also exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Similarly, there is a rebuttable presumption that the deployer used reasonable care if they complete the following, among other things:
- a risk-management program that considers the NIST AI Risk Management Framework (AI RMF) or another similarly recognized risk management framework with substantially similar requirements (for more information about conducting an AI Risk Assessment, you can check out our post here);
- an impact assessment that includes the purpose, use cases, deployment context, and an analysis of whether the system poses any foreseeable risks of discrimination, along with steps taken to mitigate those risks (an illustrative sketch of such a record appears after this list);
- notice to consumers when certain systems are being used that includes the system’s purpose, contact information, and options to opt out of AI processing for that purpose, correct personal information used in the decisionmaking process, and appeal the decision.
- Disclosure should be clear. Regardless of risk level, any AI system that is directly interacting with Colorado consumers must disclose that it is an AI system, unless that would be obvious to a reasonable person.
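To make the deployer obligations above more concrete, here is a minimal sketch of how an impact-assessment record might be structured, assuming hypothetical field names and an invented example system; it tracks the elements the Act describes (purpose, use cases, deployment context, foreseeable discrimination risks, and mitigations) but is not language drawn from the statute.

```python
# Hypothetical sketch of an impact-assessment record for a high-risk AI system,
# loosely tracking the elements the Colorado AI Act asks deployers to document.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    use_cases: List[str]
    deployment_context: str
    foreseeable_discrimination_risks: List[str]
    mitigations: List[str]
    completed_on: date = field(default_factory=date.today)


assessment = ImpactAssessment(
    system_name="resume-screening-model",             # invented example system
    purpose="Rank applicants for initial interviews",
    use_cases=["employment screening"],
    deployment_context="HR recruiting workflow for U.S. applicants",
    foreseeable_discrimination_risks=["disparate impact on protected classes"],
    mitigations=["annual bias testing", "human review of all rejections"],
)

print(f"{assessment.system_name} assessed on {assessment.completed_on}")
```

Keeping these records in a consistent, dated format also makes it easier to show that assessments were updated when a system materially changed.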
Key Dates & Enforcement: While this law was originally set to take effect in 2026, Colorado Governor Polis called a special legislative session, beginning August 21, to address budget issues. The impact of SB24-205 (Consumer Protections for Artificial Intelligence) is on the agenda, which may result in a delayed enforcement deadline and substantive changes to the law’s provisions.
Violations are treated as deceptive trade practices under Colorado’s Consumer Protection Act, subject to enforcement by the Colorado Attorney General and penalties of up to $20,000 per violation.
Texas Responsible AI Governance Act (TRAIGA)
Overview: While TRAIGA originally provided a comprehensive AI framework, the final version has been significantly pared down. With narrower substantive provisions, TRAIGA focuses on harms caused by AI, regulating – or banning outright – certain uses of these systems.
TRAIGA applies broadly to private sector companies if they provide AI-generated content or services to Texas residents, even if they are located outside the state of Texas. Additionally, government agencies interacting with the public fall squarely within the scope of the Act.
You can read more about TRAIGA at our blog post covering the Act here.
Key Provisions:
- Prohibited AI Uses for the Public and Private Sectors include, but are not limited to, intentionally inciting self-harm, violence, or crime; infringing on an individual’s rights; or unlawfully discriminating (with purposeful intent). The Act also prohibits deploying AI systems that intentionally generate illegal content, including child sexual abuse material, as well as sexually explicit chat systems that impersonate children.
- Prohibited AI Uses for the Public Sector include, but are not limited to, social scoring and uniquely identifying individuals with biometric data (with limited exceptions).
- Transparency Requirements for the Public Sector may require governmental agencies to, among other things, provide conspicuous notice to consumers that they are interacting with an AI system.
Key Dates & Enforcement:
TRAIGA was signed into law in June 2025 and takes effect on January 1, 2026. With no private right of action, the Act can only be enforced by the Texas Attorney General. The Act requires the Attorney General to create an “online mechanism” on their website where consumers can submit complaints of potential violations.
If the Attorney General determines a violation has occurred, there is a 60-day cure period. If the violation continues after this period, the Attorney General may bring a claim for, among other things:
- an injunction;
- a civil penalty of $10,000 to $12,000 for each curable breach;
- a civil penalty of $80,000 to $200,000 for each uncurable breach; and
- a civil penalty of $2,000 to $40,000 for each day of continued violation.
California CCPA Draft Regulations
Overview: On July 24, 2025, the California Privacy Protection Agency (CPPA) Board voted 5-0 to finalize Draft Regulations under the California Consumer Privacy Act (CCPA). The CPPA sent the rulemaking package to the Office of Administrative Law, which has 30 days to approve the regulations. For a deeper dive on the CCPA Draft Regulations, please see our post here.
Key Provisions:
- Automated decisionmaking technology (ADMT): Businesses must inform consumers with a pre-use notice and provide opt-out rights when AI or automated tools influence “significant decisions,” including those about employment, education, housing, healthcare, financial or lending services, and similar areas.
- Risk Assessments: Organizations engaging in high-risk data processing (such as the decisions covered in ADMT, above) must conduct risk assessments before beginning processing, and must update them regularly, including within 45 days of any material change to the system. For more information about conducting an AI Risk Assessment, you can check out our post here.
- Cybersecurity Audits: Businesses meeting certain thresholds must undergo annual, evidence-based audits carried out by a “qualified, objective, independent professional.” The audits must rely on specific evidence (as opposed to assertions by the business management), and all information related to the audit should be kept for a minimum of five years after completion.
Key Dates & Enforcement:
Compliance with these Draft Regulations will be required once they are approved by the Office of Administrative Law. The deadlines include:
- ADMT Regulations: January 1, 2027
- Privacy Risk Assessments: December 31, 2027
- Cybersecurity Audits:
- For businesses with $100+ million in annual gross revenue: April 1, 2028.
- For businesses between $50 million and $100 million in annual gross revenue: April 1, 2029.
- For businesses with less than $50 million in annual gross revenue: April 1, 2030.
Other Laws to Consider
Along with the more far-reaching laws provided above, there are additional laws that businesses may want to consider when building, implementing, or otherwise engaging with AI tools or systems.
- Utah’s Artificial Intelligence Policy Act
- Effective as of May 2024, this Act mandates certain disclosures when businesses use generative AI to interact with consumers. This requirement applies specifically to “regulated professions,” where the provider must make the disclosure prominently, regardless of whether it is obvious that the person is interacting with an AI system.
- New York City’s Local Law 144 (and other AI employment regulations)
- Signed in 2021, this law applies to employers and employment agencies in New York City that use “automated employment decision tools” to screen candidates or employees. It requires that an independent bias audit be conducted within one year of using the AI tools. For more information on AI in employment, see our article on AI In the Workplace: Legal Considerations for Leadership Teams.
- California’s AI Transparency Law (SB 942)
- Effective January 1, 2026, this law applies to “covered providers” – those offering generative AI systems with over 1 million monthly users in California. These providers must offer: 1) a free, public AI detection tool; and 2) certain disclosures, either as a visible label or embedded within their content.
- California’s Data Transparency Law (AB 2013)
- Effective January 1, 2026, developers of generative AI systems must post on their website documentation regarding the data used to train the AI system. This documentation includes a high-level summary of the datasets used in the development of the AI system – the sources or owners of the datasets, how they further the purpose of the AI system, the number of data points in the datasets, and more (see the illustrative sketch below).
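As an illustration only, the short sketch below shows one hypothetical way a developer might structure the dataset documentation described above; the field names and the example dataset are assumptions for illustration, not the statute’s wording.

```python
# Hypothetical sketch of a training-data disclosure entry along the lines of the
# documentation AB 2013 describes: dataset source/owner, purpose served, and size.
from dataclasses import dataclass
from typing import List


@dataclass
class DatasetDisclosure:
    dataset_name: str
    source_or_owner: str
    purpose_served: str            # how the dataset furthers the system's purpose
    approximate_data_points: int


disclosures: List[DatasetDisclosure] = [
    DatasetDisclosure(
        dataset_name="support-ticket-corpus",        # invented example dataset
        source_or_owner="internal customer-support logs",
        purpose_served="teaches the model common support questions and answers",
        approximate_data_points=250_000,
    ),
]

for d in disclosures:
    print(f"{d.dataset_name}: ~{d.approximate_data_points:,} data points "
          f"from {d.source_or_owner}")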
Key Takeaway
As lawmakers race to keep up with the breakneck speed of AI implementation, guidance is quickly becoming enforcement. While specific requirements between these laws vary, the common thread is clear: covered entities are expected to understand, document, and justify their AI systems’ design, data, and impact.
Additionally, organizations utilizing AI should consider building responsible AI governance into their operations. By incorporating these governance processes into everyday systems – much as they do for privacy and cybersecurity – organizations can proactively protect against legal, ethical, and operational risk when implementing AI.