
In The Vacuum Of Federal Legislation, California Sets National AI Policy
by Lily Li | Founder of Metaverse Law | Cybersecurity & AI Lawyer

On his first day in office, President Trump rescinded Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[1] The agencies quickly followed suit: the EEOC and DOL withdrew their guidance on AI and workplace discrimination, and HHS stalled on its proposed updates to the HIPAA Security Rule in the face of a Texas court order. This past summer, the House added a ten-year moratorium on state AI laws to the “One Big Beautiful Bill Act,” only to have the Senate strike the provision by a 99-to-1 vote.[2] Overall, the message from the White House is clear: deregulate AI.

These federal efforts may have created a backlash, however, by spurring an aggressive state response. Whatever your politics, and whether you think it a good thing or a bad one, California has stepped into the vacuum to set national AI policy. The state’s 2025 legislative session was a banner one, setting records for the number and scope of new AI laws.

From AI Safety to AI Transparency & Risk Assessments

On September 29, 2025, Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law. Starting in January 2026, California will require large frontier AI developers to publish a framework detailing how they incorporate safety, security, and testing standards into their AI models. SB 53 also creates a mechanism for AI developers and the public to report critical safety incidents, and it protects internal whistleblowers who report risks posed by frontier AI models. The law establishes significant penalties for companies that fail to comply, with fines of up to $1 million per violation.

Governor Newsom signed this law in part to spur federal action. In his signing message, he stated that if the federal government adopts similar or more demanding national AI standards, California would take further action to align its policy with the national standard. Per Governor Newsom, “In enacting this law, we are once again demonstrating our leadership, by protecting our residents today while pressing the federal government to act on national standards.”[3]

Governor Newsom’s action on SB 53 contrasts with his position on AI legislation a year earlier. On September 29, 2024, exactly a year before signing SB 53, he vetoed SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) – the precursor bill to SB 53. The veto message pointed to the importance of the AI industry to California and noted that the proposed legislation might overlook smaller, more dangerous models, among other concerns. Just as importantly, it pointed to emerging federal standards under NIST’s U.S. AI Safety Institute and Governor Newsom’s hope of coordinating with federal partners and experts.[4] With the U.S. AI Safety Institute scrapped and federal AI efforts under fire, Governor Newsom’s about-face on SB 53 seems less about the surface-level changes between SB 53 and SB 1047 and more about the growing political divide over AI legislation.

The California approach to AI transparency and safety legislation also needs to be read in conjunction with the California Privacy Protection Agency’s (CPPA’s) recently approved regulations. In addition to more traditional privacy concerns, the CPPA’s most recent 127-page rulemaking package contains requirements governing cybersecurity audits, risk assessments, and automated decision-making technology (ADMT).[5] Businesses that develop or deploy AI systems processing personal information, and that meet certain California privacy thresholds, will now face substantial cybersecurity audit and risk assessment requirements. If they use automation to make significant decisions concerning the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services, they will also face significant notice, opt-out, and risk assessment requirements.

These ADMT regulations also have a history: they are five years in the making and stem from Prop 24, the 2020 ballot initiative that amended California’s privacy laws. While the CPPA was considering the draft ADMT regulations, Governor Newsom sent a letter to the Agency asking it to pare down their scope. Per Governor Newsom, “enacting these regulations could create significant unintended consequences and impose substantial costs that threaten California’s enduring dominance in technological innovation.”[6] The final version was a compromise, paring away references to generative AI and artificial intelligence generally but maintaining the bulk of the remaining requirements.

Thus, the unique political circumstances of California play out in AI regulation: a double whammy of SB 53 from the state legislature and ADMT regulations from the CPPA, an agency founded through the direct democracy of California’s ballot initiative process.

Civil Rights, Employment Bias, and Discrimination

In 2023, the EEOC issued guidance that cautioned employers to use AI workplace tools responsibly, addressed the use of AI software and algorithms in employment selection processes under Title VII, and explained employers’ compliance responsibilities under the ADA. In 2024, the DOL issued a document entitled “Artificial Intelligence and Worker Well-Being – Principles and Best Practices for Developers and Employers.”[7] This document recommended that employers include workers in the AI adoption process, bargain in good faith with unions over the adoption of AI technologies, establish AI governance and human oversight, avoid relying on AI systems to make “significant employment decisions” without “meaningful human oversight,” and monitor AI to safeguard worker rights, including leaves of absence, accommodations, wages, and break times. While President Trump’s rescission of Biden’s executive order on AI did not expressly rescind these guidance documents, the EEOC and DOL removed the publications from their websites.[8]

In contrast, California’s Civil Rights Council promulgated regulations implementing California’s civil rights laws and explicitly stating that the state’s antidiscrimination laws apply to AI workplace tools. These new regulations went into effect on October 1, 2025.[9] Under the new regulations, it is unlawful for an employer to use automated-decision systems or selection criteria that discriminate on a basis protected by existing California law. Relevant to any such claim or available defense is evidence, or the lack thereof, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such efforts, the results of any testing, and the response to those results.

Healthcare AI and Chatbots

California is also taking center stage in healthcare AI regulation. In January 2025, California Attorney General Rob Bonta issued a “Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare.”[10] This advisory set forth California’s existing consumer protection, civil rights, competition, and data privacy laws governing healthcare and highlighted the passage of several healthcare AI bills in 2024. In 2025, California continued to advance healthcare AI legislation with the passage of AB 489, which prohibits AI systems from falsely indicating or implying, through marketing or other functionality, possession of a medical license or certificate.[11]

AI Safeguards for Children: Converging Standards

While California and the federal government may be at odds over broad AI legislation, there is convergence in one area: the protection of minors. Lawmakers on both sides of the aisle agree that children should be protected from harmful AI content, whether suicidal ideation, self-harm, or sexually explicit imagery. Partly, this is spurred by the tragic suicides of a teenager in Orange County, California and another in Florida, both of whom formed close relationships with generative AI systems before their deaths.[12] Partly, it is driven by the furor over a leaked internal Meta document disclosing content standards that allowed AI systems to “engage a child in conversations that are romantic or sensual.”[13]

In California, for instance, Governor Newsom signed SB 243, landmark AI chatbot legislation that requires “companion chatbots” to address suicidal ideation, sexually explicit imagery, and extended use by minors.[14] The law applies to chatbots that provide human-like interactions and are capable of sustaining relationships across multiple interactions, and it requires AI disclosures, referrals to suicide hotlines or crisis text lines, and break reminders. SB 243 further requires operators of companion chatbots to institute reasonable measures to prevent the chatbot from producing visual material of sexually explicit conduct or from directly stating that a minor should engage in sexually explicit conduct. The legislation provides a private right of action for individuals who suffer “an injury in fact,” with statutory damages of one thousand dollars ($1,000) per violation, or actual damages if greater. California also passed companion bills AB 1043 and AB 56, which further require age verification and warning labels for covered online platforms.

At the federal level, the FTC launched an inquiry into seven major consumer-facing chatbot companies, asking for information on how these firms measure, test, and monitor the potentially negative impacts of this technology on children and teens.[15] This followed investigations by the Texas Attorney General into Meta and Character.AI for alleged unfair and deceptive practices toward children.[16] The attorneys general of forty-four states signed onto a letter following alarming reports of Meta AI chatbots engaging in sexually inappropriate conversations with children.[17] Given the common interest in protecting children online, we anticipate further efforts at both the state and federal level in 2026 to impose online age verification, parental consent requirements, and additional guardrails on children’s interactions with AI systems.

Why Follow California?

While California may be motivated to set national AI policy, why do businesses and lawmakers in other states follow California’s lead? Partially, this is due to practicality. The largest AI companies train and deploy systems at scale, and it makes little sense to build a different user interface and back-end system for each state rather than embedding the highest privacy, security, and safety controls into the system as a whole. Partially, this is due to risk appetite. Unlike most other state legislatures and Congress, California’s legislature is far more willing to adopt private rights of action and statutory damages in its legislation – incentivizing lawsuits. This in turn develops a body of case law that provides guidance and interpretation for statutory language that may be similar across state lines. Finally, let’s not forget that California is home to some of the largest AI companies in the world. Even in a world of AI, remote work, and borderless systems, sometimes there is still a hometown advantage.

ENDNOTES

[1] White House, Initial Rescissions of Harmful Executive Orders and Actions (Jan. 20, 2025), https://www.whitehouse.gov/presidential-actions/2025/01/initial-rescissions-of-harmful-executive-orders-and-actions/
[2] U.S. Senate Committee on Commerce, Science, & Transportation, Senate Strikes AI Moratorium from Budget Reconciliation Bill in Overwhelming 99-1 Vote (July 1, 2025), https://www.commerce.senate.gov/2025/7/senate-strikes-ai-moratorium-from-budget-reconciliation-bill-in-overwhelming-99-1-vote/8415a728-fd1d-4269-98ac-101d1d0c71e0
[3] Office of the Governor, SB 53 Signing Message (Sept. 29, 2025), https://www.gov.ca.gov/wp-content/uploads/2025/09/SB-53-Signing-Message.pdf
[4] Office of the Governor, SB 1047 Veto Message (Sept. 29, 2024), https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf
[5] California Privacy Protection Agency, Text of Regulations (CCPA Updates, Cyber, Risk, ADMT, and Insurance Regulations), https://cppa.ca.gov/regulations/pdf/ccpa_updates_cyber_risk_admt_appr_text.pdf
[6] Tyler Katzenberger, Big Tech has another California problem, Politico (Apr. 25, 2025), https://www.politico.com/news/2025/04/25/big-tech-california-data-privacy-regulation-fight-newsom-00309171
[7] U.S. Department of Labor, Department of Labor releases AI Best Practices roadmap for developers, employers, building on AI principles for worker well-being (Oct. 16, 2024), https://www.dol.gov/newsroom/releases/osec/osec20241016
[8] Gone but Not Forgotten: Federal Laws Still Apply Despite AI Guidance Disappearance Act, Cooley Alert (Feb. 21, 2025), https://www.cooley.com/news/insight/2025/2025-02-21-gone-but-not-forgotten-federal-laws-still-apply-despite-guidance-disappearance-act
[9] California Civil Rights Department, Civil Rights Council Secures Approval for Regulations to Protect Against Employment Discrimination Related to Artificial Intelligence (June 30, 2025), https://calcivilrights.ca.gov/2025/06/30/civil-rights-council-secures-approval-for-regulations-to-protect-against-employment-discrimination-related-to-artificial-intelligence/
[10] California Attorney General, Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare (Jan. 2025), https://oag.ca.gov/system/files/attachments/press-docs/Final%20Legal%20Advisory%20-%20Application%20of%20Existing%20CA%20Laws%20to%20Artificial%20Intelligence%20in%20Healthcare.pdf
[11] California Legislature, AB 489 (2025–2026 Reg. Sess.), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB489
[12] Associated Press, Parents of teens who died by suicide after AI chatbot interactions to testify to Congress, Orange County Register (Sept. 16, 2025), https://www.ocregister.com/2025/09/16/chatbots-teens-safety-congress/
[13] Jeff Horwitz, Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info, Reuters (Aug. 14, 2025), https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
[14] Office of the Governor, Governor Newsom signs bills to further strengthen California’s leadership in protecting children online (Oct. 13, 2025), https://www.gov.ca.gov/2025/10/13/governor-newsom-signs-bills-to-further-strengthen-californias-leadership-in-protecting-children-online/
[15] Federal Trade Commission, FTC Launches Inquiry into AI Chatbots Acting as Companions (Sept. 11, 2025), https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions
[16] Texas Attorney General, Attorney General Ken Paxton Investigates Meta and Character.AI for Misleading Children with Deceptive AI-Generated Mental Health Services (Aug. 18, 2025), https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-investigates-meta-and-characterai-misleading-children-deceptive-ai
[17] National Association of Attorneys General, letter to AI chatbot companies (Aug. 25, 2025), https://oklahoma.gov/content/dam/ok/en/oag/news-documents/2025/august/AI%20Chatbot_FINAL.pdf

Lily Li
is an AI, data privacy, and cybersecurity lawyer and founder of Metaverse Law. She is a certified information privacy professional for the United States and Europe and is a GIAC Certified Forensic Analyst for advanced incident response and computer forensics. She can be reached at info@metaverselaw.com.