Introduction
As the United States transitions to a new administration, federal policymaking is beginning to shift away from civil rights and other Biden-era AI governance priorities and towards AI policies focused on “out-innovating the rest of the world,” securing US technological advantage, and strengthening national security, defense, and cybersecurity. Meanwhile, states will play a critical role in technology policy as they continue to innovate on privacy, AI, and cyber policymaking.
Against the backdrop of the raft of state-level AI, privacy, and cyber laws passed in recent years – including Colorado’s sweeping AI Act – and the many more proposals percolating within state legislatures as the 2025 legislative session opens, the California Privacy Protection Agency (CPPA) has proposed new regulations that would, if adopted, establish AI, privacy, and cybersecurity norms likely to ripple across the US market.
Specifically, on November 8, 2024, the CPPA advanced proposed regulations implementing the following provisions of the California Consumer Privacy Act (CCPA):
- Risk Assessments. “Businesses” regulated by the CCPA would need to conduct and document risk assessments for a variety of enumerated activities, including “selling” personal information, “sharing” personal information for “cross-context behavioral advertising,” processing “sensitive personal information” (such as government IDs, financial information, precise geolocation, biometric data, health and genetic information, and information relating to race, ethnicity, and sexual orientation), using “automated decision-making technologies” (ADMT) that replace or substantially facilitate human decision-making, engaging in “extensive profiling,” and training AI models. AI is defined within the proposed regulations as a “machine-based system that infers, from the input it receives, how to generate outputs [including predictions, content, recommendations, or decisions] that can influence physical or virtual environments” and may operate with “varying levels of autonomy” to achieve “explicit or implicit objectives.” The regulations provide detailed guidance on the content of these risk assessments and prohibit businesses from proceeding with an activity when its risks outweigh its benefits.
- ADMT Notices, Opt-Outs, and Explanations – Including a New Right to Opt Out of First-Party Advertising. The proposed regulations require businesses to provide consumers with “pre-use notices” before using ADMT and permit consumers to challenge decisions made using ADMT. Businesses would also need to provide consumers with explanations of the logic employed by any ADMT systems, among other requirements. Importantly, the CPPA’s proposed regulations would extend these ADMT requirements to AI-assisted decisions – not just “solely” automated decisions as in other privacy laws, such as the EU’s General Data Protection Regulation (GDPR). The proposed regulations would also create a right to opt out of first-party advertising, which the CCPA did not previously regulate.
- Cybersecurity Audits. The proposed regulations would require businesses meeting certain thresholds relating to the volume and sensitivity of personal information they process to conduct annual “cybersecurity audits” meeting specified content, scope, and methodology requirements. Compliance with the audit requirements would need to be certified annually by a member of the board or a high-ranking executive. The proposed regulations would indirectly compel a wide range of businesses not currently subject to prescriptive cybersecurity requirements to improve their security controls and overall cyber programs.
- CCPA Application to Insurers. The proposed regulations would apply the CCPA to state-regulated insurance companies, which currently fall outside the scope of the CCPA for any activities regulated by the Gramm-Leach-Bliley Act (GLBA).
The CPPA held a hearing on January 14, 2025, to provide an opportunity for public comment on the proposed rules, with an additional hearing and an extended public comment deadline on February 19. If the agency votes to proceed with rulemaking following public comment, it could move to finalize the regulations as soon as April 1, 2025. Any substantial changes would require additional public consultation.
Below are the key takeaways for businesses regulated by the CCPA.
Risk Assessments
The draft regulations require businesses to conduct risk assessments for data processing that presents “significant risk” to consumers. Risk assessments would need to identify the benefits and risks of the proposed processing activities, as well as the safeguards employed to address such risks.
While risk assessment requirements exist in other privacy laws, including the GDPR (in the form of a “data protection impact assessment”) and other US state laws, the proposed regulations would impose several novel requirements, including:
- Extension to new types of processing activities. The proposed regulations require businesses that engage in certain specified activities to conduct risk assessments. These activities include “selling” personal information, “targeted advertising,” use of ADMT to make certain types of “significant decisions” (such as determining compensation, hiring, allocation of work, promotions, and admission into an academic program), and processing “sensitive personal information.” All of these activities would also require risk assessments under several other state consumer privacy laws. In addition, the proposed regulations require risk assessments for activities that are not subject to risk assessment requirements under other state laws, including:
- “Extensive profiling,” defined as systematic observation of job applicants, employees, or students, systematic observation of a publicly accessible place, or profiling a consumer for behavioral advertising; or
- Using personal information to “train” certain types of AI/ADMT models. “Training” is defined as the process through which ADMT or AI “discovers underlying patterns, learns a series of actions, or is taught to generate a desired output.” This requirement applies only to AI and ADMT models with the following characteristics:
- Generative models, such as large language models (LLMs); or
- Models that are capable of being used for: “profiling” consumers (i.e., to analyze interests, preferences, reliability, behavior, location, health, performance at work, etc.); making significant decisions (i.e., decisions affecting financial or lending services, housing, insurance, education enrollment, employment opportunities, healthcare, or essential goods or services); establishing individual identity, or physical or biological identification (i.e., information that depicts or describes a consumer’s physical or biological characteristics, or measurements of or relating to their body, such as biometric information, vocal intonation, facial expression, and gestures); or generating a deepfake (i.e., manipulated or synthetic audio, image, or video content that is falsely represented as a truthful depiction of a consumer).
- Specific, additional requirements for AI and ADMT applications. The proposed regulations would require businesses that use AI or ADMT to document the “completeness, representativeness, timeliness, validity, accuracy, consistency, and reliability” of personal information used in connection with AI and ADMT, as well as the “logic of the [ADMT], including any assumptions or limitations in the logic.” Service providers that make AI or ADMT available to their business customers would be required to provide such customers with the facts necessary to permit them to conduct their own risk assessment.
- Substantive restrictions on activities that involve disproportionate risks to consumers. The proposed regulations prohibit businesses from engaging in activities in which the risk to consumers outweighs the intended benefits to consumers, the business, or other third parties. However, the proposed regulations do not specify how businesses should weigh the relative risks and benefits of an activity, particularly where such risks and benefits accrue to different parties, or whether a business’s analysis is entitled to deference from regulators. If the proposed regulations take effect without material changes, enforcement actions and judicial decisions will be critical to understanding where businesses will need to draw the line regarding such risks and benefits.
- Privilege and confidentiality considerations. The proposed regulations require businesses to disclose their risk assessments in their entirety to the CPPA within 10 days of a request by the agency. Businesses will need to carefully consider how to protect any confidential and/or privileged information that forms the basis of an assessment.
The proposed regulations allow a business to repurpose a CCPA risk assessment for compliance with other applicable laws and for other “compatible” processing activities. Given that other US state risk assessment requirements are less prescriptive, the CPPA’s proposed approach is likely to become the default standard for US risk assessments.
ADMT
Drawing from frameworks on both sides of the Atlantic, the CPPA’s proposed ADMT regulations bridge the gap between ADMT requirements that have been a longstanding feature of privacy laws and the newer generation of AI-specific laws, such as the EU AI Act and the Colorado AI Act.
Key features of the proposed framework include:
- Expansion of ADMT to AI-assisted decisions, not just decisions that are “solely” automated. The proposed regulations define ADMT as “any technology that processes personal information and uses computation to execute a decision, replace or substantially facilitate human decision-making” (emphasis added). The extension of ADMT requirements to tools that facilitate human decisions could bring into scope a wide range of technologies that do not make solely automated decisions, such as analytic and diagnostic tools designed to inform human judgment, rather than replace it.
- Application of ADMT requirements to AI training, “public profiling,” and “extensive profiling” even in the absence of a significant decision. Previous ADMT frameworks have focused on decisions with substantial impact on consumers, such as “legal” or “significant” decisions under the GDPR, “eligibility” decisions under the US Fair Credit Reporting Act (FCRA), and “consequential” decisions under the Colorado AI Act. The proposed regulations would extend further, reaching technologies that “profile” consumers – including for advertising, employment, or educational purposes – and technologies that use a consumer’s personal information to train an ADMT system capable of making significant decisions, profiling consumers, establishing identity, or generating deepfakes, even if those technologies have not been used to make any significant decision involving the consumer.
- Separate “pre-use notices” must describe any proposed use of ADMT. The proposed regulations would require businesses to provide consumers with a “pre-use notice” – before employing ADMT – that explains key features of the ADMT system, including its “logic.” Pre-use notices will require a level of specificity about data uses that may exceed businesses’ current disclosure practices. For example, describing processing activities “in generic terms, such as ‘to improve our services’” will not be sufficient.
- Consumers can opt out of ADMT, with limited exceptions. The proposed regulations permit consumers to opt out of ADMT, except for certain security, fraud prevention, and safety purposes. In addition, in the employment and educational contexts, businesses can give consumers the option to have a human review the decision, rather than allowing them to opt out altogether. Businesses cannot ask consumers for proof of identity for opt-out requests, unless the business has a good faith, reasonable, and documented belief that a request is fraudulent.
- A new right to opt out of first-party advertising. By extending ADMT requirements to “behavioral advertising,” which the CPPA defines to include targeted advertising based on a consumer’s activity within the business’s own “distinctly-branded websites, applications or services” – in addition to “cross-context behavioral advertising” already regulated by the CCPA – the proposed regulations create an opt-out of first-party advertising that was not previously regulated by the CCPA. This new requirement generated significant feedback during the January 14 hearing, with commenters noting the risk of the CPPA exceeding its statutory authority.
- “Access” and “explanation” requirements will create challenges for LLMs and black-box systems. The proposed regulations grant consumers a broad right of access that includes “plain language explanations” of the ADMT’s outputs, how the business plans to use the outputs, and “how the [ADMT] worked with respect to the consumer.” Explaining the logic of ADMT systems, not only in general terms but also as applied to any particular consumer, is likely to present significant challenges for businesses that employ sophisticated AI tools.
- Significant overlap with other federally-regulated sectors. Many of the examples of ADMT in the proposed regulations overlap with activities that are regulated by federal frameworks and exempted from the CCPA. For example, the proposed regulations define ADMT to include housing, employment, and financial eligibility decisions that often rely on “consumer reports” regulated by FCRA. While the CCPA expressly exempts from its scope personal information regulated by such frameworks, the inclusion of these examples within the proposed regulations raises questions about how the CPPA will interpret the breadth of its authority.
Cybersecurity Audits
The draft regulations require covered businesses to conduct annual, independent cybersecurity audits that assess and document how the business’s cybersecurity program protects personal information from unauthorized access, destruction, use, modification, or disclosure; and protects against unauthorized activity resulting in the loss of availability of personal information.
Businesses whose processing of consumers’ personal information presents a “significant risk to consumers’ security” would be required to comply with the cybersecurity audit requirements. These thresholds, illustrated in the sketch following this list, capture businesses that:
- Derive 50% or more of their annual revenue from selling or sharing consumers’ personal information; or
- Made over $28 million[1] in gross annual revenue the preceding year and:
- processed personal information of 250,000 or more consumers or households in the preceding calendar year; or
- processed the “sensitive personal information” of 50,000 or more consumers in the preceding calendar year.
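To make these thresholds concrete, below is a minimal sketch, in Python, of how a business might self-assess whether the proposed audit requirements would apply. The data structure and field names are hypothetical illustrations, not part of the proposed regulations; the numeric thresholds are those listed above.

```python
from dataclasses import dataclass

@dataclass
class BusinessProfile:
    # All field names are hypothetical, for illustration only.
    annual_gross_revenue: float              # preceding year, USD
    revenue_from_selling_or_sharing: float   # preceding year, USD
    consumers_or_households_processed: int   # personal information, preceding calendar year
    consumers_sensitive_info_processed: int  # sensitive personal information, preceding calendar year

# Thresholds from the proposed regulations (the $28 million figure is
# CPI-adjusted; see footnote [1]).
REVENUE_THRESHOLD = 28_000_000
PI_VOLUME_THRESHOLD = 250_000
SENSITIVE_PI_VOLUME_THRESHOLD = 50_000

def cybersecurity_audit_required(b: BusinessProfile) -> bool:
    """Illustrative check of whether processing presents a 'significant
    risk to consumers' security' under the proposed regulations."""
    # Prong 1: 50% or more of annual revenue derived from selling or
    # sharing consumers' personal information.
    if b.annual_gross_revenue > 0 and (
        b.revenue_from_selling_or_sharing / b.annual_gross_revenue >= 0.5
    ):
        return True
    # Prong 2: over $28M in gross annual revenue AND either volume
    # threshold met in the preceding calendar year.
    if b.annual_gross_revenue > REVENUE_THRESHOLD and (
        b.consumers_or_households_processed >= PI_VOLUME_THRESHOLD
        or b.consumers_sensitive_info_processed >= SENSITIVE_PI_VOLUME_THRESHOLD
    ):
        return True
    return False

# Example: a retailer with $40M in revenue that processed personal
# information of 300,000 households would meet the second prong.
assert cybersecurity_audit_required(
    BusinessProfile(40_000_000, 1_000_000, 300_000, 10_000)
)
```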
Key considerations include:
- Extension of auditing requirements to whole classes of businesses not currently subject to cybersecurity regulation. US cybersecurity laws and regulations historically focused on specific regulated sectors, such as financial services, healthcare, and critical infrastructure. The proposed regulations, by contrast, are sector-agnostic and could apply to any business that meets the data processing thresholds described above.
- Expanded and more detailed scope. The proposed scope of the cybersecurity audit includes both broad and specific requirements. Broadly, the audit must assess the business’s cybersecurity program as appropriate to its size, complexity, the nature and scope of its processing activities, and the cost of implementing the components of a cybersecurity program. Specifically, the audit must assess 18 categories – with numerous subcategories and examples – that align closely with the requirements of prevailing cybersecurity frameworks and sector-specific audits, such as NIST’s Cybersecurity Framework and financial sector security frameworks.
- Qualified and independent auditors. Cybersecurity audits would need to be performed by a qualified, objective, and independent professional. While the proposed regulations permit internal auditors to perform cybersecurity audits, internal auditors must report directly to the board or governing body and not to business management.
- Certification requirements create an avenue for individual board member or executive liability. The draft regulations require businesses to submit to the CPPA each calendar year a certificate of completion signed by a member of the board or governing body or, if no such board or equivalent body exists, the business’s highest-ranking executive responsible for cybersecurity. In other contexts, such as Sarbanes-Oxley Act compliance, similar certification requirements have been used to create personal liability for the attesting company officer. Businesses will need to carefully oversee their cybersecurity audits and document their practices to protect responsible officers and the company from enforcement risk.
- Required documentation of inapplicable controls. If a business determines that one or more of the security controls within the scope of the CPPA’s proposed audit requirements is unnecessary, the business must document and explain, within the cybersecurity audit, why the control is not necessary to the protection of personal information and how the safeguards in place provide at least equivalent security. This requirement may force many companies to accelerate investment in maturing their cybersecurity controls and risk management processes.
- Record retention. Records of previously conducted cybersecurity audits must be retained for five years.
- Assessment of prior audits. Cybersecurity audits must identify existing gaps or weaknesses and address the status of gaps or weaknesses identified in previous cybersecurity audits, including any corrections or amendments made in response.
- Security incident documentation, with implications for incidents that have no connection to California. If a business has reported a security incident to any regulator with jurisdiction over privacy laws or other data processing – anywhere in the world – the proposed regulations require the business to include within the cybersecurity audit a description of the incident and copies of the relevant notifications.
- Equivalent audits would not need to be duplicated. The proposed regulations would not require a business to duplicate a cybersecurity audit it has already conducted in compliance with the requirements. Given the highly prescriptive criteria under the proposed rules, however, it is unlikely that audits conducted for other purposes would independently satisfy these requirements.
Insurance Companies
The proposed regulations require insurance companies subject to the California Insurance Code to comply with the CCPA for any collection of personal information not preempted by the Code. Currently, personal information processed by state-regulated insurance companies in connection with any financial products or services they offer is usually exempt from the CCPA because such personal information is regulated by the GLBA and state-level financial privacy laws, such as the California Financial Information Privacy Act.
If the proposed regulations are implemented in their current form, California-regulated insurance companies that meet the thresholds of a “business” will need to extend their CCPA compliance programs to capture GLBA-regulated consumer personal information. The proposed regulations do not explain how such insurers would resolve any conflicts or inconsistencies between existing financial privacy requirements and the CCPA.
Conclusion
The proposed regulations, if approved and implemented in their current form, would impose significant new requirements on businesses regulated by the CCPA and would introduce several novel consumer choices, including a right to opt out of first-party behavioral advertising and a right to opt out of AI training. The proposed regulations are also likely to raise privacy, cybersecurity, and AI governance standards across sectors that previously were not subject to US requirements to conduct internal risk assessments and cybersecurity audits.
In past CPPA rulemakings, the final form of proposed draft regulations has not changed significantly after advancing to the formal rulemaking process. However, these proposed regulations have triggered division within the CPPA unlike any previous rulemaking, with two of the five CPPA board members (including Alastair Mactaggart, an original author of the CCPA) dissenting and calling the proposed regulations “overreach.” On January 31, one of the members who voted in favor of the proposed regulations, Vinhcent Le, was replaced on the CPPA Board by Brandie Nonnecke, a tech policy researcher and Founding Director of the CITRIS Policy Lab at the University of California, Berkeley.
Following the close of the comment period on February 19, the CPPA will review submitted comments and may revise the proposed regulations if needed. Under the California Administrative Procedure Act (APA), if substantial changes are made, an additional 15-day public comment period for the revised text would be required. Barring any major changes, the CPPA could implement the rules as soon as April 1.
[1] This figure reflects the legally required adjustment for increases in the Consumer Price Index. See Draft Update to Existing Regulations, March 2023, § 7005(b)(1), available at https://cppa.ca.gov/meetings/materials/20240308_item4_draft_update.pdf.