
California: New AI laws in California – roundup of the 2025 legislative session


This article was originally published by OneTrust DataGuidance on November 24, 2025 and can be found on the DataGuidance website.

California introduces comprehensive AI laws focusing on transparency, children’s safety, healthcare, antitrust, and law enforcement.

California has taken an aggressive stance towards artificial intelligence (AI) legislation and will likely set the standard for other US states. Back in 2024, Governor Newsom vetoed SB 1047, a comprehensive AI safety bill, and advised caution on regulating this nascent and important technology. This year, Governor Newsom pressed ahead with a full slate of new AI laws. The reasons for this change in approach are many, including but not limited to the lack of federal AI legislation, growing concern over children’s interactions with AI, especially sexualized content, and harmonization with more stringent requirements in the EU and elsewhere.

This year’s legislative session set records for the number and scope of new AI laws. For the roundup this year, Lily Li, of Metaverse Law Corporation, breaks down the new AI laws by scope and sector, noting where this may add on to existing California legislation and rulemaking from 2024-2025.

General AI safety, transparency, and risk assessments

  • SB 53: Transparency in Frontier Artificial Intelligence Act (Wiener) – Starting in January 2026, California will require large frontier AI developers to publish a framework detailing how they incorporate safety, security, and testing standards into their AI models. SB 53 also creates a mechanism for AI developers and the public to report critical safety incidents, and protects internal whistleblowers who report risks posed by frontier AI models. The law establishes significant penalties for companies that fail to comply, with fines of up to $1 million per violation.
  • AB 316: Artificial Intelligence defenses (Krell) – This amends California’s Civil Code. If a party to a lawsuit develops, modifies, or uses AI, this law prohibits them from asserting as a defense that the AI autonomously caused the harm.
  • AB 853: California AI Transparency Act (Wicks) – This bill expands the existing AI Transparency Act and modifies the effective date from January 1, 2026, to August 2, 2026. The California AI Transparency Act requires covered generative AI developers to provide an AI-detection tool to assess whether image, video, or audio content is created or altered by generative AI. This bill adds to the existing law by requiring large online platforms to embed provenance data into generated content. Starting January 1, 2028, users will also have the option to include latent disclosures on ‘capture devices’ such as cameras, video recorders, and other recorders.

This new California approach to AI transparency and safety legislation needs to be read in conjunction with the following existing laws.

  • California Privacy Protection Agency’s (CPPA’s) recently approved Cyber, Risk, ADMT, and Insurance Regulations – The CPPA’s most recently updated 127-page regulation package contains requirements governing cybersecurity audits, risk assessments, and automated decision-making technology. AI developers and systems that process personal information and meet certain California privacy thresholds will now face new cybersecurity audit and risk assessment requirements. In addition, automated and significant decisions concerning the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services will trigger significant notice, opt-out, and risk assessment requirements.
  • AB 2013: AI Training Data Transparency Act (Irwin-2024) – Passed last year, this law will require covered generative AI developers to publish online a high-level summary of the datasets used in the development of the generative AI system or service, including but not limited to whether personal information or copyrighted information is included in the training data. The law is scheduled to go into effect on January 1, 2026.

Children’s safety, age verifications, and companion chatbots

  • SB 243: Companion Chatbots (Padilla) – This law applies to chatbots that provide human-like interactions and are capable of sustaining relationships across multiple interactions. Beginning July 1, 2027, developers of these ‘companion chatbots’ will need to develop and report protocols addressing suicidal ideation and self-harm to regulators and the public. The law requires AI disclosures, referrals to suicide hotlines or crisis text lines, and break reminders. SB 243 further requires developers to institute reasonable measures to prevent the chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct. The legislation provides a private right of action for individuals who suffer ‘an injury in fact,’ with statutory damages of $1,000 per violation, or actual damages if greater.
  • AB 1043: Digital Age Assurance Act (Wicks) – Starting January 1, 2027, operating systems and covered application stores will be required to obtain age data from users and pass on age bracket data to developers when users download and launch an application.
  • AB 56: Social Media Warning Law (Bauer-Kahan) – Starting January 1, 2027, covered social media platforms will need to display a warning label to minors the first time a user accesses the platform each day, after three hours of active use, as well as once per hour of cumulative active use after that. The warning label must say ‘The Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users.’
  • AB 621: Deepfake pornography (Bauer-Kahan) – This amends California’s Civil Code and expands protections against deepfake pornography. The law explicitly provides a cause of action against individuals who create or disclose deepfake pornography if they know, or reasonably should know, that the depicted individual was a minor and also provides a cause of action against individuals who knowingly facilitate or recklessly aid or abet the creation or disclosure of such nonconsensual deepfake pornography. The bill confirms that a minor cannot consent to the creation or distribution of deepfake pornography.

California’s approach to AI and children has a long and complicated history, and these new laws should be read in conjunction with the following laws on the books.

  • California Age Appropriate Design Code (Wicks) – This law was signed on September 15, 2022, and was scheduled to go into effect on July 1, 2024. Modeled after the UK Age Appropriate Design Code, this law requires businesses to conduct impact assessments, provide Privacy by Default, estimate the age of all users, and restrict dark patterns. The law was enjoined in March 2025, but is being appealed by the California Attorney General.
  • Protecting Our Kids from Social Media Addiction Act (Skinner-2024) – This law is scheduled to go into effect on January 1, 2027, and prohibits covered social media platforms from providing addictive feeds to minors without verifiable parental consent. The law has so far escaped a constitutional challenge, but may face other court challenges prior to the effective date.

Healthcare AI and chatbots

  • AB 489: Health care professions: deceptive terms or letters: artificial intelligence (Bonta) – This law prohibits AI systems from falsely indicating or implying possession of a medical license or certificate through advertising, marketing, or other functionality. AB 489 also makes AI developers that develop such a system directly subject to enforcement by the relevant healthcare professional licensing board or enforcement agency. Each use of a prohibited term, letter, or phrase shall constitute a separate violation.

California’s approach to AI in healthcare also needs to be read in conjunction with the following laws and guidance.

  • Legal Advisory on the Application of Existing California Law to Artificial Intelligence in Healthcare – In January 2025, California Attorney General Rob Bonta issued this advisory, setting forth California’s existing consumer protection, civil rights, competition, and data privacy laws governing healthcare AI.
  • SB 1120: Physicians Make Decisions Act (Becker-2024) – This law prohibits covered healthcare service plans from denying, delaying, or changing healthcare services based, in whole or in part, on medical necessity using AI, algorithms, or other software tools. Such determinations shall require a physician or licensed healthcare professional and review of individual circumstances. This law also requires written policies and procedures governing such determinations.
  • AB 3030: Artificial Intelligence in Health Care Services (Calderon-2024) – This law applies to health facilities, clinics, physicians’ offices, or other health group practices that use generative AI for communications about patient clinical information. Under this bill, generative AI communications that pertain to clinical information must include:
    • a disclaimer, presented at the beginning of the interaction, indicating that the communication was generated by AI; and
    • clear instructions on how the patient can contact the appropriate person.

Antitrust and pricing discrimination

  • AB 325: Cartwright Act violations (Aguiar-Curry) – This amends California’s existing antitrust law, the Cartwright Act, to explicitly cover ‘common pricing algorithms.’ The law prohibits:
    • the use or distribution of a ‘common pricing algorithm’ as part of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce; or
    • coercing another party to set or adopt a price or term recommended by the common pricing algorithm for the same or similar products or services.

Under AB 325, complaints are not required to allege facts tending to exclude the possibility of independent action.

Law enforcement use of AI

  • SB 524: Law Enforcement Agencies (Arreguín) – SB 524 requires law enforcement to disclose if an official report was written either fully or in part using AI, as well as retain the first draft created by AI and an associated audit trail that, at minimum, identifies both the officer who used AI to create a report and the video and audio footage used to create a report, if any. SB 524 also prohibits AI vendors from sharing, selling, or otherwise using information, except as provided in the bill (e.g., troubleshooting, bias mitigation, quality control, legal purposes, etc.).

Employment and bias

While Governor Newsom vetoed SB 7, the No Robo Bosses Act, the Governor’s veto letter pointed to the CPPA’s ADMT regulations as addressing some of the bill’s requirements. Per Governor Newsom, SB 7 is ‘partially covered’ by these regulations, as they ‘allow employees and independent contractors to better understand how their personal data is used by automated decision technology.’ In addition, the California Civil Rights Council’s recently promulgated regulations state that California’s antidiscrimination laws apply to AI workplace tools. These regulations address another concern raised in SB 7, which sought to prohibit automated decision systems (ADS) from inferring a worker’s protected status.