In a precedent-setting decision, the California Supreme Court has ruled that vendors that use artificial intelligence (AI) for employment-related functions can be held directly liable for discriminatory practices under state law.
This judgment marks a sea change in legal liability in the state, opening the door to holding third-party platforms accountable. As Randy Erlewine, a noted attorney at the San Francisco-based firm Phillips, Erlewine, Given & Carlin, put it, “Everyone is being hired these days online, and you’ve got these major third-party platforms that may have discriminatory biases or metrics, and the employer may not know it is discriminatory.”
The Raines Case
Central to the ruling was a class action lawsuit in which Kristina Raines sued U.S. Healthworks Medical Group. The company, which contracts with employers to medically screen job applicants, presented Raines with a health history questionnaire containing questions she considered intrusive. After she declined to complete the questionnaire, her job offer was rescinded.
The lawsuit took a turn when the Ninth Circuit asked the state high court whether California’s Fair Employment and Housing Act (FEHA) allows a “business entity acting as an agent of an employer to be held directly liable for employment discrimination.” On Aug. 21, the court unanimously answered yes.
Justice Martin Jenkins, writing for the court, clarified that such a business entity can be held directly liable under FEHA only when it carries out FEHA-regulated activities on behalf of an employer.
Industry Repercussions
Amanda Herron, an attorney with Gordon Rees Scully Mansukhani, highlighted the potential ramifications of the decision across sectors ranging from HR to consulting. As she pointed out, “We can’t say for sure what the impact is going to be yet, but it seems like it will double the amount of defendants who can be named in these FEHA suits.”
Previously, the court had generally declined to hold individuals acting on behalf of their employers liable for FEHA discrimination. The new ruling, however, treats business-entity agents with five or more employees differently, on the reasoning that they possess bargaining power more on par with that of the employers themselves.
The shift has the potential to ripple nationwide: business agents that work with California firms would now be subject to the law, regardless of the workforces they serve. Niloy Ray, a shareholder at Littler Mendelson, emphasized, “This is going to be one more impediment when it comes to doing business in California.”
Future Landscape
Litigation costs are projected to rise given the expanded pool of potential defendants. Ray elaborated on the complexities: “Now you have multiple entities and each has their own insurance company, and it becomes more complicated to figure out who will be ultimately responsible for those costs.”
The judgment arrives amid ongoing state-level debates over the government’s role in overseeing automated tools that influence sectors from employment to insurance. Notably, the California Civil Rights Council is actively drafting regulations on the use of AI and machine learning in employment-related decisions.
Reflecting on the broader picture, Ray reads the Raines verdict as a signal that AI vendors and their partner firms will jointly bear responsibility for discriminatory lapses. As he put it, “Think of this liability as a wall that’s being built. Many of the bricks are already in place because of the proposed revisions, and this just adds more bricks to the wall.”