New York City Targets AI Use in Hiring: Anti-Bias Law Explained

July 5, 2023, 9:26 AM UTC

New York City employers hoping to use artificial intelligence to make employment decisions must ensure these tools pass an independent audit for bias no more than one year before their implementation, under a newly enforceable law.

The first-in-the-nation law, which goes into effect Wednesday, attempts to address the growing trend of employers using AI in hiring and promotion decisions. A 2022 survey from the Society for Human Resource Management shows that nearly one in four organizations use AI in some HR capacity, but concerns remain about whether these tools can select and screen candidates without bias.

The tools are used for a range of hiring functions, from parsing resumes to evaluating body language during video interviews. While automated systems have the potential to remove subjective human prejudice in hiring decisions, they often rely on historical data that bakes in institutional biases which could disenfranchise certain protected groups.

In May, the Equal Employment Opportunity Commission released updated guidance clarifying how employers could face liability under Title VII of the Civil Rights Act of 1964 for AI-related bias claims, but the New York law marks a novel attempt to use legislation to protect job applicants.

1. What does the law require?

The New York City Council passed the law in December 2021. The New York City Department of Consumer and Worker Protection (DCWP) issued two rounds of proposed rules for the law before releasing the final rule in April.

The law prohibits employers from using automated tools to screen candidates unless the software has undergone an independent review to check for bias against protected groups.

All New York City employers who wish to use AI to hire will also need to submit a summary of their most recent bias audit on their company website.

All job candidates who live within the city must be notified if this software is used during the hiring process. At a candidate’s request, the employer must make available “the type of data collected for the automated employment decision tool, the source of such data and the employer or employment agency’s data retention policy.”

2. What does a bias audit entail?

Under the final rule, an independent auditor must be “capable of exercising objective and impartial judgment” on the bias audit. The rule states that anyone involved in developing the automated tool, or anyone who has a direct financial or employment interest in the company using the AI, cannot be considered independent.

Previously, there was no set legal framework for the bias audits, which are typically conducted by AI consulting firms; employers and vendors engaged in them at their own discretion.

A bias audit must calculate a tool’s “selection rate” and “impact ratio,” according to the New York City guidelines.

The rules define the selection rate as the rate at which individuals in a protected category are moved forward in the hiring process or assigned a particular classification by the software. The impact ratio is the selection rate for a particular category divided by the selection rate of the most selected category.

The audit must then calculate the impact that sex, race, and the intersection of the two have on the automated tool’s decisions.

The law does not set a threshold for passing the audits.

The EEOC has used as a rule of thumb the “four-fifths rule,” which says that if a hiring test has a selection rate of less than 80% for protected groups compared to others, the gap may indicate bias.
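To make the arithmetic concrete, here is a minimal sketch of the selection-rate and impact-ratio calculations described above, with a four-fifths check. The category names and counts are hypothetical examples, not data from any real audit, and this simplified code is illustrative rather than a compliance tool.

```python
# Illustrative sketch of the impact-ratio math described in the DCWP rules.
# All category names and counts below are hypothetical.

def selection_rate(selected, total):
    """Share of candidates in a category moved forward by the tool."""
    return selected / total

def impact_ratios(counts):
    """counts: {category: (selected, screened)} -> {category: impact ratio}.

    Each category's selection rate is divided by the highest selection
    rate among all categories, per the rule's definition.
    """
    rates = {cat: selection_rate(s, t) for cat, (s, t) in counts.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening outcomes: (candidates selected, candidates screened)
counts = {
    "Category A": (80, 100),   # selection rate 0.80
    "Category B": (60, 100),   # selection rate 0.60
}

for cat, ratio in impact_ratios(counts).items():
    flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```

In this example, Category B’s impact ratio is 0.60 / 0.80 = 0.75, which falls below the EEOC’s four-fifths rule of thumb, though, as noted above, the New York law itself sets no passing threshold.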

3. What data needs to be collected?

The DCWP rule stipulates that the employer must use historical data when running the independent audit. “Historical data” is defined as data collected while an employer was using AI for hiring purposes.

If an employer does not collect data regarding sex or race, or does not have historical data, the audit may use sample test data. When test data is used, the employer must publicly explain how the data was obtained and why historical data could not be used.

The burden will be on the employer, not the AI tool vendor, to provide proof of an audit. Some vendors, such as HireVue and Harver, have begun conducting the auditing process themselves to help employers comply with the law.

4. What happens next?

Management-side attorneys have said the novel nature of the law and its highly technical requirements could reduce the use of automated hiring tools in New York City as existing auditing firms cope with the increase in demand spurred by the legislation.

Employers found in violation of the law will incur a $500 fine for the first violation and $1,500 for each subsequent violation. There is no private right of action.

The law could also be a blueprint for other jurisdictions across the country as pressure mounts for effective regulation of workplace AI.

Maryland and Illinois recently passed laws that require a candidate’s consent prior to the use of AI in recorded video interviews. Washington, D.C., and California have also proposed legislation to police automated hiring tools for discrimination.

To contact the reporter on this story: George Weykamp in Washington at gweykamp@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Genevieve Douglas at gdouglas@bloomberglaw.com
