
New York City’s AI Bias Law Fails to Gain Traction Among Employers

Overview of NYC’s AI Bias Regulation Efforts

Despite high hopes for its potential to regulate artificial intelligence (AI) tools in hiring, New York City’s “AI Bias Law” appears to be largely overlooked by employers seven months after taking effect. The law requires employers to audit automated employment decision tools (AEDTs) for race and gender bias, publish the audit results, and notify employees and job candidates when such tools are used.

Limited Compliance Observed

A Cornell University study reveals a stark gap between legislative expectations and reality: only 18 of the 391 employers analyzed had complied by posting audit results. The finding has led experts to question the law’s efficacy, with AI expert Hilke Schellmann criticizing it as “absolutely toothless.”

Narrow Definition and Compliance Hurdles

Critics argue that amendments made during the drafting process diluted the law’s effectiveness by narrowing its scope to AEDTs that operate without meaningful human oversight. Guru Sethupathy of the AI governance platform FairNow and attorney Amanda Blair of Fisher Phillips point to the difficulty of defining what counts as an AEDT and to employers’ reluctance to disclose audit results for fear of inviting scrutiny.

The Role of Human Oversight

The law’s limited scope lets many employers claim exemption by asserting that humans ultimately make the final hiring decisions. Yet this does little to address the early-stage use of AI to screen applications, the stage where bias is most likely to enter and where audits are most needed, according to Schellmann.

Enforcement Challenges

The enforcement mechanism, which relies on complaints to the New York City Department of Consumer and Worker Protection, has yet to produce any action, partly because job applicants are often unaware that AI tools were used in their assessment. This lack of awareness blunts the law’s intended impact.


Comparison to Illinois’ AI Video Interview Act

Schellmann draws parallels between New York’s law and Illinois’ Artificial Intelligence Video Interview Act, noting both focus on disclosure rather than compelling employers to alter their AI use. Such regulations may not effectively address the underlying issues of AI bias or discrimination in hiring.

Future of AI Regulation in Employment

Experts predict increasing regulation of AI systems, with other states, and possibly the European Union, setting more comprehensive governance, policy, and monitoring requirements for AI in hiring. The EU AI Act, in particular, is poised to impose stringent rules on high-risk AI applications, including those used in employment.

Conclusion: A Call for More Effective Legislation

As New York City’s AI Bias Law struggles to produce meaningful compliance among employers, the need for more robust regulation becomes clear. Future legislation, both in the U.S. and abroad, may take a more comprehensive approach to AI governance, emphasizing best practices, continuous monitoring, and significant penalties for noncompliance, with the aim of better protecting job applicants from AI-driven bias.
