California Approves Landmark AI Employment Regulations
At a Glance
- Revisions to Title 2 of the California Code of Regulations will govern the use of AI-based tools in California starting October 1, 2025.
- Among other things, the regulations define the scope of AI-driven (and other) automated decision-making systems (ADS), clarify what constitutes discriminatory use of ADS, require anti-bias testing of ADS, impose new recordkeeping requirements, and discuss affirmative defenses to employer liability.
- The final regulations are less burdensome than the original draft, but still impose several new compliance requirements for employers.
On June 30, 2025, the California Civil Rights Council (“CRC” or “Council”) secured final approval for revisions to Title 2 of the California Code of Regulations, which governs administration of the California Civil Rights Department (CRD). These regulations interpret the California Fair Employment and Housing Act’s (FEHA) prohibitions against discrimination in recruitment, hiring, promotion, training, and termination, specifically adding requirements and expectations for the use of “artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing” to facilitate human decision-making. The revisions take effect on October 1, 2025, creating urgent compliance needs for employers using AI-based tools in the Golden State.
In summary, the regulations:
- define the scope of automated decision-making systems (ADS) and other key terms;
- provide details on the prohibition of discriminatory use of ADS;
- require anti-bias testing of ADS;
- implement recordkeeping requirements;
- raise the bar for employer, agent, vendor and employment agency liability;
- define a key affirmative defense; and
- outline the diligence employers can conduct to establish this defense.
More specifically, compared with the earlier draft, the approved regulations:
- no longer create joint-and-several liability (or aiding-and-abetting liability) for intermediate entities in the AI supply chain, such as retailers, advertisers, designers, technical service providers, and others providing services related to the use of AI tools for employment decision-making;
- no longer mandate retention of all data used to train the machine-learning AI model underlying an ADS, all data inputs to the ADS, and all of the ADS’s outputs;
- no longer presume that personality-based questionnaires and puzzle-based or otherwise gamified assessments are impermissible pre-offer medical/psychological examinations;
- require a tighter nexus between the use of an ADS and the alleged harm before liability can be imposed; and
- no longer shift a key evidentiary burden previously placed on employers—proving the absence of a less-discriminatory alternative.
Over three-plus years, employers have watched these revisions with concern because of the significant new regulatory obligations proposed with respect to AI-based ADS tools. In final form, however, certain burdensome aspects of the previous proposal have been eliminated or replaced with more commonsense, feasible options.
While some changes from the previous draft, such as a narrower data-retention requirement, are more business-friendly, other compliance requirements add new burdens, such as coverage of Generative AI as well as traditional Predictive AI, and focus on evidence of pre-use testing and risk-mitigation efforts by employers. At a 30,000-foot level, the pains taken to work AI, machine-learning, and automated decision-making into the existing regulations signal the Council’s intent to scrutinize AI use in hiring and beyond. At the same time, adding evidentiary guidelines should help employers to understand the due diligence they can undertake to avoid running afoul of such scrutiny.
A Brief Recap of California’s FEHA + AI Proposal History
The California Civil Rights Council released draft revisions to the California Code of Regulations on March 15, 2022, focused on expanding the obligations and potential liability of employers and third-party vendors that use, sell, or administer employment-screening tools or services incorporating artificial intelligence and other data-driven statistical processes to automate decision-making. Following that initial proposal, two judicial developments further defined the landscape of California’s artificial intelligence laws.
On March 16, 2022, the U.S. Court of Appeals for the Ninth Circuit certified to the Supreme Court of California the question of whether the California Fair Employment and Housing Act’s definition of “employer,” which already included “any person acting as an agent of an employer,” permits a business entity acting as an agent to be held directly liable for employment discrimination. On August 21, 2023, the California Supreme Court answered this question in the affirmative in Raines v. U.S. Healthworks Medical Group,1 holding that an employer’s business entity “agents” may be considered “employers” when interpreting the FEHA. Raines further held that any such agent may be held directly liable for employment discrimination in violation of the FEHA if it has at least five employees and “carries out FEHA-regulated activities on behalf of an employer.”
Looking Forward to California’s Revised Regulations, Effective October 1, 2025
On October 4, 2024, the Council released additional updates to its proposed revisions in response to public feedback. The October revisions generally tempered the original proposal, which was perceived as swaying against employer interests. The revised proposal added clarity, specifically with regard to employer responsibilities in their use of AI tools. On June 30, 2025, the Council announced that it secured final approval of the October 4, 2024 revisions.
Employers should note these key takeaways before the new law’s effective date of October 1, 2025:
Improper Burden-Shifting Amended: The single most pressing issue with the Council’s initial draft was its burden-shifting of the job-relatedness/business necessity defense. In its initial draft, the Council shifted the burden of proving the absence of a less-discriminatory alternative to the employer.2 This appeared to violate other principles of California law.3 The Council’s latest revisions generally remove that language, replacing it with “subject to any available defense” except in § 11072.4
Joint-and-Several Liability Scaled Back Dramatically: Another major critique the Council has addressed is the prior draft’s extensive imposition of joint-and-several liability. Four previous instances of this have now been edited:
- § 11020 no longer extends aiding & abetting liability to developers, designers, advertisers, sellers, or other mere providers of ADS tools.5
- The definition of “agent” in § 11008(b), which previously included all those who “provided services related to making hiring or employment decisions,” has now been scaled back to those who “exercise a function traditionally exercised by the employer,” thus limiting the scope to active participants in the hiring process.
- “Employment agency” is now defined in § 11008(g) as those “undertaking, for compensation, the procurement of job applicants . . . including persons undertaking those services through the use of an automated-decision system.”
- The record-retention requirement in § 11013 no longer requires data retention by “any person who sells or provides” an ADS or other selection criteria.
Overall Liability Causation Tightened: The showing required for liability to be imposed on an employer has been raised:
- from “related to” use of an ADS to “resulted, in whole or in part, from” use of an ADS (§ 11028);
- from “involved” with use of an ADS to “relied, in whole or in part, on” use of an ADS (§ 11017.1);
- the overall definition of “adverse impact” now requires a “substantial disparity” in selection rates to constitute discrimination (§ 11008(a));
- the revised definition of “proxy” now requires the variable to be not just correlated but “closely correlated” to a protected characteristic.
ADA-Focused Changes Narrowed: The prior revisions had several provisions focused on disability discrimination, including most prominently § 11071(e), which deemed personality-based questions a type of prohibited pre-offer medical test. The regulation now makes such questions impermissible only if they are “likely to elicit information about a disability.”
Similarly, the prior proposal to scrutinize all measures of skill/dexterity/reaction time and related attributes (§ 11016(c)) now states that an employer may need to provide reasonable accommodation for any identified disability. The same clarification regarding reasonable accommodation has also been added to § 11016(d), which scrutinizes ADS functions that analyze tone of voice, facial expression, or other physical characteristics.
Fully ADS-Driven Background Checks Permitted: Previously, an all-ADS background check was prohibited.6 That restriction has been stricken, so background checks conducted entirely by an ADS are now permissible.
Record Retention Requirements (Potentially) Reduced: The § 11013 retention requirements continue to extend for four years, but the applicable ADS Data definition (§ 11008.1(d)) has been scaled back significantly. While the prior draft included all data used to train the machine learning underlying an ADS, all data inputs to the ADS, and all of the ADS’s outputs, the new definition appears to be limited to “any data used in or resulting from” the use of an ADS (i.e., strictly inputs and outputs), and “any data used to design develop or customize an [ADS] for use by a particular employer or other covered entity” (emphasis added). Given the sweeping language it replaces, this set of revisions appears intended to reduce the employer’s retention burden.
Pre-use Due Diligence Criteria Identified: Finally, the revisions establish that evidence of an employer’s due diligence prior to (and after) adopting an ADS is significant in assessing liability. In several key sections, starting with § 11009, the revisions state that “evidence, or lack of evidence, of anti-bias testing or similar proactive efforts” to prevent discrimination is relevant to analysis of a claim or defense under the law. The revisions identify six relevant aspects of such testing:
- the quality of the testing efforts;
- the efficacy of the tests;
- the recency of the testing;
- the scope of the tests;
- what the test results reveal about ADS outcomes; and
- whether and how the employer responded to the results and their implications.
This approach, while deemed neither necessary nor sufficient by the revisions, aligns California’s stance with recent laws in New York City, Colorado, and the European Union, all of which require employers and other “deployers” of AI tools to be responsible, to some extent, for pre-use (and ongoing) risk assessment and diligence.
Despite their detail, some aspects of the new regulations remain murky and will require further analysis as courts (and the CRC) begin to interpret them:
Definition of “ADS” Remains Broad: The term “ADS” continues to include systems “derived from and/or use artificial intelligence, machine-learning algorithms, statistics, and/or other data processing techniques,” and the analysis of “employee or applicant data” by a third party.7 As currently phrased, static questionnaires that happen to be “computer-based” would fall within a literal interpretation of the “ADS” umbrella, despite seeming not to raise the AI bias concerns that underlie state regulation of this area.
Generative AI Now Included in Definition of “ADS”: The new definition of ADS has been revised to include “content” that is “generate[d]” from inputs to the system. The revisions do not define the term “generated,” which invites speculation as to the extent of employers’ responsibilities in monitoring their own, or their ADS vendors’, use of generative AI.
New, Undefined Standards: Some new language in the regulations lacks specific definition, including most prominently the terms “substantial disparity”8 and “closely correlated.”9
While these newly minted additions to the California Code of Regulations create new burdens for employers, they also meaningfully improve upon the previously circulated proposals and have the net effect of reducing potential employer liability for ADS use. What remains certain is that the Golden State intends to hold both developers who create automated decision-making tools and employers who deploy said tools responsible for resulting adverse consequences, intended or not.