In a 4-1 vote, the California Privacy Protection Agency (CPPA) has decided to move forward with formal rulemaking on its draft regulations concerning AI, cybersecurity audits, profiling, and risk assessments, despite complaints of regulatory overreach.
On Friday, November 8, the CPPA held a public meeting to discuss proposed updates to the California Consumer Privacy Act (CCPA) regulations. The hybrid meeting drew nearly 45 public comments from a broad range of stakeholders, including business representatives, privacy advocates, and industry experts. While the passing vote would typically have triggered a 45-day public comment period on the draft regulations, Chairperson Urban requested flexibility in light of the upcoming holidays.
Legal Challenges
During the meeting, the CPPA stated that it had been sued for failing to promulgate regulations, specifically on opt-out rights for information processed by automated decisionmaking technologies (ADMTs). At the same time, commenters argued that the breadth of the proposed rules overstepped the intent of the CCPA.
Board Member Alastair Mactaggart, who helped draft the CCPA, voiced concerns about the regulations, arguing that the current proposal is excessively broad to the point of being unworkable. He pointed out that the regulations, as written, apply to nearly all businesses that use any kind of software to generate any type of output, whether AI-powered or not. For example, a simple tool like a spreadsheet or a school admission application could fall under the rules, forcing a large swath of low-risk businesses to conduct risk assessments. Mactaggart characterized this as statutory overreach and argued that the regulations should focus on issues that genuinely affect privacy or security.
Economic Forecasts
The CPPA also issued a Standardized Regulatory Impact Assessment (SRIA), which was discussed during the meeting. In the assessment, the CPPA estimates the total cost of the regulatory initiative at roughly $3.5 billion for the first year of implementation, with an average of $1 billion in each subsequent year over the first ten years. The CPPA justifies this cost by asserting that the direct benefits to California businesses will be $1.5 billion in 2027 and $66.3 billion in 2036.
However, the California Chamber of Commerce states that “[b]usinesses, consumers and governments in California will suffer net losses from the proposed rules pending before the [CPPA] this week.” This statement stems from a report prepared for the Chamber of Commerce by Capitol Matrix Consulting, which concludes that the regulations are likely to “result in substantial net losses to businesses, consumers, and governments in this state, both in the near and long term.”
Industry groups including TechNet, the Civil Justice Association of California, and the Interactive Advertising Bureau voiced concern about the heavy compliance burden the regulations place on businesses, especially small businesses that may not have the resources to implement the required risk assessments or redesign their services to accommodate opt-out provisions.
Behavioral Advertising & Opt-Out Provisions
Another key point of contention during the meeting was the provision allowing consumers to opt out of decisions made by AI systems.
The draft regulations govern a wide range of AI. Under the draft, AI is defined as a “machine-based system that infers, from the input it receives, how to generate outputs that can influence physical or virtual environments.” Additionally, the draft defines ADMTs as “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.”
Together, these definitions are more expansive than the definition of high-risk automated processing addressed in Article 22 of the EU’s GDPR, the source of the original opt-out language. Under Article 22, a consumer has the right to opt out of decisions based solely on automated processing, such as targeted advertising.
However, critics argue that pairing the draft’s opt-out language with such expansive definitions of AI and ADMTs could have unintended consequences, especially for small businesses. Mactaggart, for instance, is concerned that applying the opt-out rule too broadly could lead to a breakdown of essential services. For example, online booking services for airlines and automated reservation software for hotels may rely on software that would be categorized as “AI” under this definition. Allowing users to opt out of AI when requesting these services may be untenable, creating friction in these industries and ultimately harming consumers by limiting access to these services or increasing costs.
Risk Assessments
A central component of the draft regulations is the requirement that businesses that use AI, as defined above, conduct risk assessments. While the goal of this requirement is to ensure that businesses are aware of and mitigate potential privacy risks arising from these technologies, critics believe the regulations go too far by applying the requirement to low-risk, everyday activities.
For example, a representative from the California Grocery Association expressed concern about how the opt-out provision would affect a chain of small rural grocery stores with which she does business. While these AI tools could help consumers save money, the cost of compliance might put the tools out of reach, especially given the thin profit margins in the grocery industry.
Again, Mactaggart questioned the scope of the draft. He and other advocates called for a narrower focus for risk assessments, one centered on significant decisions such as those that deny individuals access to essential goods and services: the denial of a loan application, exclusion from an online platform, or an adverse employment decision. One commenter stated that there have been no public comments against regulating high-risk systems, and that by focusing on these issues, the CPPA could better mitigate potential harms while freeing low-risk systems from potential overregulation.
Additionally, a commenter suggested that risk assessments be streamlined and aligned with other states’ standards to reduce compliance costs. Mactaggart noted that accepting risk standards from other U.S. jurisdictions could help businesses avoid duplicative efforts, cut compliance costs, and reduce the overall regulatory burden.
AI Training
The ability to opt out of AI dataset training was of lesser concern but was still addressed by a number of commenters. For example, a representative from the Software and Data Industry Association argued that requiring a consumer opt-out from AI dataset training could create a substantial burden for small businesses that already struggle to accumulate representative training data. Other commenters shared concerns that these opt-outs could compromise the quality and effectiveness of AI systems.
Ultimately, California faces a delicate balance in regulating AI and ADMTs. On one hand, the state must work to protect consumers from privacy risks, potential discrimination, and other adverse impacts of AI. On the other, the CPPA must ensure that its rulemaking does not stifle innovation, create excessive compliance costs, or diminish competition among businesses that rely on AI.
As formal rulemaking moves forward, it will be crucial for the CPPA to consider feedback from the public comment period and to refine the regulations to ensure that they strike a balance between privacy concerns and costs to consumers and businesses alike.