October 2025 California Anti-Discrimination Regulations Addressing the Impact of AI on Employment-Related Hiring and Management Decisions

Starting October 1, 2025, California will implement new regulations designed to prevent employment discrimination arising from the use of artificial intelligence (AI), algorithms, and other automated decision-making systems (the “AI Regulations”). The AI Regulations can be found within the California Code of Regulations, Title 2, Sections 11008 through 11079. They were approved on June 27, 2025, by the California Civil Rights Council, a branch of the Civil Rights Department (CRD). The AI Regulations clarify how the Fair Employment and Housing Act’s (FEHA) long-standing antidiscrimination laws apply to modern hiring and employment practices, especially as AI-driven tools become more common for screening, evaluating, and making personnel decisions.

The Context for AI Regulations:

The AI Regulations reflect the competing challenges that employers and workers face when weighing the unprecedented opportunities and risks posed by AI. Even AI companies seem to struggle with how to market themselves: some advertise AI as a way to streamline the workforce through job replacement, while others tout it as a driver of job growth and performance efficiencies. The CRD’s June 30, 2025, press release aptly captures this tension:

“While these [AI] tools can bring myriad benefits, they can also exacerbate existing biases and contribute to discriminatory outcomes. Whether it is a hiring tool that rejects women applicants by mimicking the existing features of a company’s male-dominated workforce or a job advertisement delivery system that reinforces gender and racial stereotypes by directing cashier ads to women and taxi jobs to Black workers, there are numerous challenges that may arise with the use of artificial intelligence in the workplace.”

For these reasons, the AI Regulations prohibit employers from using automated systems that may inadvertently discriminate against candidates or employees based on protected characteristics such as race, gender, or disability. The definition of “automated-decision systems” (ADS) is broad, covering not only fully autonomous AI but also technologies that assist humans in employment decisions, including resume filters, targeted job ads, and algorithmic assessment tools. These tools may produce unintended discrimination through functions (referred to in the AI Regulations as “proxies”) such as ZIP code search parameters, speech-pattern analysis, and facial analysis, among others.

Here are some examples of common proxies and their inherent risks:

  • ZIP Code: AI hiring systems may use ZIP codes as factors in screening candidates, but these can serve as proxies for race or socioeconomic status due to patterns of housing segregation. For instance, excluding applicants from certain ZIP codes could unfairly eliminate people from lower-income or minority communities, resulting in racial or ethnic discrimination even if the tool never expressly considers race or ethnicity.
  • Speech Patterns: AI video interview platforms and assessment tools often analyze speech tone, fluency, or accent. These features can proxy for national origin, race, or disability. For example, an AI system may rate non-native English speakers or individuals with speech impairments lower, unfairly disadvantaging them compared to native speakers or those without disabilities.
  • Facial Analysis: AI systems that evaluate facial expressions have been criticized for potentially enabling racial, gender, or disability bias, leading to candidates being unfairly scored or eliminated from consideration.

These practices illustrate how, even without intent, reliance on AI-based employment screening and evaluation criteria can lead to employer liability for discrimination.

Non-Delegable Duties and Joint Employment Implications of Using AI Vendors:

Another critical aspect of the AI Regulations is that indemnification agreements with AI vendors will not protect employers from direct liability for violations. Specifically, the regulations expand the definition of a responsible “employer” to include an “agent,” defined as “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity,” such as recruiting, screening, hiring, promoting, or making other employment-related decisions where ADS is used. In practice, this means employers are directly responsible for the actions of their agents, including, for example, recruiters, staffing firms, and other outside vendors, when those agents use AI tools on the employer’s behalf, even if they are independent of the organization. Liability for discrimination can therefore attach wherever an agent misuses (or misapplies) AI in hiring, promotions, compensation, or other personnel decisions on behalf of the employer.

As a result, an employer cannot transfer its fundamental obligation to prevent discrimination; this is a non-delegable duty under FEHA, meaning the employer remains responsible for ensuring legal compliance no matter how many safeguards or indemnification clauses are written into the vendor agreement. Also, by delegating hiring or other personnel functions to an AI vendor, the employer creates a joint employment risk—courts may treat both the employer and the vendor as “co-employers,” exposing both parties to direct liability for any discriminatory outcomes produced by the automated system.

Choosing well-insured vendors with proven track records, and including solid indemnification provisions in vendor agreements, remain critical parts of the due diligence process, especially for vendors in the AI and employment arena. It is obviously better to choose a vendor who will take financial responsibility for its actions if a claim or lawsuit arises from its AI systems. But those contractual protections will not absolve the hiring company of direct liability for the vendor’s acts or omissions that violate the AI Regulations.

Recordkeeping:

Employers must retain relevant employment and automated decision records for at least four years.

Recommended Risk Management and Mitigation Steps:

The AI Regulations are much like the evolution of AI itself: dynamic and hard to follow. But the most important takeaway and immediate action items are relatively simple. Employers must periodically review all AI and algorithmic tools used internally or through outside vendors, with the overarching goal of testing for bias. Some examples, like ZIP code search limitations, may be fairly obvious. However, as AI systems continue to evolve rapidly, employers will need to make diligent, ongoing efforts to keep up with technological advancements to avoid chasing the proverbial bouncing ball of AI-related anti-discrimination obligations.

Recommended immediate actions for any employer or prospective employer to take include:

  • Review your company’s technology and software systems concerning AI and ADS tools.
  • Identify any ADS or algorithmic tools used in recruiting, hiring, performance evaluations, or workforce management.
  • Choose vendors wisely by engaging in appropriate due diligence, including insurance analysis and reference checks. Consider consulting with AI-bias experts to assist in any vendor selection process. Remember, you only know what you know, which is all the more reason to consult with experts.
  • Make sure vendor agreements include solid indemnity provisions and detailed descriptions of all services being provided (while remembering that those contractual provisions will not absolve employers of direct liability for potential AI Regulations violations).

Conclusion:

Regardless of your overall view of California’s business and employment culture and legislative scheme, one thing is certain: California remains at the forefront of technological advancement, with AI taking top billing for the foreseeable future. Consistent with this history, California is among the first states to adopt comprehensive standards specifically addressing the potential risks of AI in the workplace, striving for a fairer and more transparent recruitment and employment process.

Scherer Smith & Kenny LLP remains available to assist you with these and any other employment law-related questions you may have. For additional information, please contact Denis Kenny at denis@sfcounsel.com.

– Written by Denis S. Kenny