Regulatory

NYC AI Bias Audit: 7 Things You Need to Know About the Updated NYC Algorithmic Hiring Law

Credo AI
January 19, 2023

The buzz surrounding New York City's algorithmic hiring law continues to grow every day—particularly after the latest proposed rulemaking from the Department of Consumer and Worker Protection (DCWP), released in December 2022, sought to clarify earlier uncertainty. With the enforcement deadline fast approaching, it is critical for organizations to understand the requirements of NYC Local Law No. 144, especially its revisions, before enforcement begins on April 15, 2023.

The clock is ticking, and we at Credo AI are committed to helping organizations understand what the recent updates and key aspects of the new regulation mean for them. In today’s blog post, we will outline seven things employers and employment agencies need to know about the requirements for AI bias audits. Without further ado, let's get started.

1. What is the new algorithmic hiring law in New York City?

In December 2021, the New York City Council passed Local Law No. 144—which amends the administrative code of the city of New York to address automated employment decision tools. Overall, the law regulates the use of these tools in hiring and promotion decisions for candidates and employees within the city and requires that AI and algorithm-based technologies for recruiting, hiring, or promotion be audited for bias before being used. Put simply, if you are an employer or employment agency in New York City that builds or uses these tools, this law is relevant to you.

2. What constitutes an automated employment decision tool? 

The law defines an automated employment decision tool (AEDT) as one used “to substantially assist or replace discretionary decision-making.” The latest rulemaking from the DCWP, issued in December, clarified this phrase to mean:

(i) to rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered; 

(ii) to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or 

(iii) to use a simplified output to overrule conclusions derived from other factors, including human decision-making.

Examples of AEDTs include tools that screen resumes and schedule interviews for job postings, as well as those that score applicants for a "culture fit."

3. What are the compliance requirements for employers or agencies under this law?

There are numerous legal requirements that employers must adhere to when utilizing automated employment decision tools. A few examples are: 

- Have an independent auditor conduct an annual bias audit on the AEDT no more than one year before using the tool.

- Make the summary of the results of the most recent bias audit publicly available on the company website.

- Give notice to candidates that an AEDT will be used, and provide details of the AEDT's characteristics at least 10 days before using it.

- Allow a candidate to request an alternative selection process or accommodation.

4. What are the penalties for non-compliance with New York City's algorithmic hiring law?

Non-compliant employers and agencies may face penalties, including a civil penalty of up to $500 for a first violation, with a minimum of $500 and a maximum of $1,500 incurred for each subsequent violation. Each day an automated employment decision tool is in use in violation of the law will generate a separate violation. 
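As a rough illustration of how these penalties can compound, the sketch below (a hypothetical scenario and a simplified reading of the penalty schedule; actual amounts are determined by the city) computes the possible penalty range for a tool used in violation for a given number of days:

```python
def penalty_range(days_in_violation):
    """Possible (minimum, maximum) civil penalty when each day of
    non-compliant AEDT use counts as a separate violation:
    $500 for the first violation, $500-$1,500 for each subsequent one."""
    if days_in_violation <= 0:
        return (0, 0)
    subsequent = days_in_violation - 1
    low = 500 + 500 * subsequent
    high = 500 + 1500 * subsequent
    return (low, high)

# A single tool used in violation for 30 days:
print(penalty_range(30))  # → (15000, 44000)
```

Even a month of non-compliant use of one tool can therefore add up to tens of thousands of dollars in potential penalties.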

5. What are the updated requirements for the bias audit following the December 2022 Rulemaking?

The December 2022 rulemaking brings necessary clarifications to the use of automated employment decision tools in hiring and promotion decisions. If you would like to view the complete list of changes, please refer to the official document shared by the New York City Department of Consumer and Worker Protection.

In summary, the following are the updates to New York City's algorithmic hiring law:

A. Clarification on the qualifications for independent auditors.

In the context of the law, an independent auditor cannot:
i) have any financial interest (direct or indirect) in the employer using the AEDT or the vendor of the AEDT;
ii) have been involved in using, developing, or distributing the AEDT; or
iii) have an employment relationship with the employer using the AEDT or the vendor of the AEDT.

B. Clarification of mathematical calculations of disparate impact. 

The rulemaking clarifies the mathematical calculation used to measure disparate impact, which had not previously been clearly defined. The impact ratio can be calculated in one of two ways, depending on whether the system selects or scores candidates:

  1. (Selection rate for a category) / (Selection rate of the most selected category)—for example, (Selection rate for Black women) / (Selection rate for White men)
  2. (Scoring rate for a category) / (Scoring rate for the highest scoring category), where “Scoring rate” means the rate at which individuals in a category receive a score above the sample's median score, as calculated by the AEDT.
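The two calculations above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical candidate data and category names; a real audit would use the employer's historical data:

```python
from statistics import median

# Hypothetical audit data: one record per candidate.
candidates = [
    {"category": "White men",   "selected": True,  "score": 0.91},
    {"category": "White men",   "selected": True,  "score": 0.84},
    {"category": "White men",   "selected": False, "score": 0.55},
    {"category": "Black women", "selected": True,  "score": 0.62},
    {"category": "Black women", "selected": False, "score": 0.48},
    {"category": "Black women", "selected": False, "score": 0.41},
]

def by_category(data):
    """Group candidate records by their demographic category."""
    groups = {}
    for c in data:
        groups.setdefault(c["category"], []).append(c)
    return groups

def impact_ratios_by_selection(data):
    """(Selection rate for a category) / (selection rate of the most selected category)."""
    rates = {cat: sum(c["selected"] for c in group) / len(group)
             for cat, group in by_category(data).items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

def impact_ratios_by_score(data):
    """Scoring rate = share of a category scoring above the full sample's median score."""
    med = median(c["score"] for c in data)
    rates = {cat: sum(c["score"] > med for c in group) / len(group)
             for cat, group in by_category(data).items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

for cat, ratio in sorted(impact_ratios_by_selection(candidates).items()):
    print(cat, round(ratio, 2))  # Black women 0.5, White men 1.0
```

With this toy data, both methods yield an impact ratio of 0.5 for Black women relative to White men—the kind of disparity a bias audit is designed to surface.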

C. The explicit addition of intersectional analysis of disparate impact.

Previously, the law did not specify whether organizations needed to evaluate disparate impact across intersectional categories of race/ethnicity and sex, but now the additional rulemaking clarifies that intersectional analysis is required. For example:

Before, an organization could have looked only at whether an AEDT had a lower selection rate for Black candidates vs. White candidates, or for female candidates vs. male candidates. Now, organizations must also evaluate whether the AEDT has a lower selection rate for, say, Black female candidates vs. Black male, White female, and White male candidates.
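In practical terms, the only change for intersectional analysis is that the categories being compared are race/ethnicity and sex combinations rather than single attributes. A minimal sketch with hypothetical selection counts:

```python
# Hypothetical (selected, total applicants) counts per intersectional category.
counts = {
    "White men":   (40, 100),
    "White women": (35, 100),
    "Black men":   (30, 100),
    "Black women": (20, 100),
}

selection_rates = {cat: sel / total for cat, (sel, total) in counts.items()}
top_rate = max(selection_rates.values())
impact_ratios = {cat: rate / top_rate for cat, rate in selection_rates.items()}

# Report each intersectional category's impact ratio, highest first.
for cat, ratio in sorted(impact_ratios.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: {ratio:.2f}")
```

Note how an analysis by race alone or by sex alone would mask part of the disparity that the intersectional breakdown (here, Black women at half the selection rate of White men) makes visible.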

D. Clarification on the use of historical data.

Overall, historical data should be used to conduct the bias audit, but if insufficient historical data is available, “test data” may be used instead. Here, test data is any non-historical data used to conduct the audit. While the law does not specify whether this data can be synthetic or of any other type, it does specify that if test data is used, the bias audit must include a description of why historical data was not available and how the test data was generated.

E. Clarification on the use of the same bias audit. 

According to the update, multiple employers can use the same bias audit if they all use the same tool. Hence, bias audits do not have to be employer-specific, with historical data from only one employer. However, an employer may only rely on a bias audit that contains other employers' historical data if it provided its own historical data to the auditor during the audit or if it has never used the AEDT.

F. Clarification of AEDT Prohibition.

The new rulemaking makes it more explicit that an employer may only use an AEDT if an NYC Law No. 144-compliant bias audit has been conducted in the last year.

6. When does the new law go into effect?

Enforcement of NYC Law No. 144 was postponed from January 1 to April 15, 2023, giving employers extra time to complete necessary bias audits and ensure full compliance.

7. How can Credo AI support you with the NYC Local Law 144?

Now that you have gained an understanding of the new bias audit requirements under Local Law No. 144 in New York City, it is time to take action. Our AI Governance solutions—Context Driven AI Governance Platform, Credo AI Lens™, and Policy Packs—can help ensure that your organization is prepared to meet the new requirements for your AI-driven employment decision tools. Don't let bias audits hold you back—let Credo AI support you by streamlining your AI Governance process!

In light of the overwhelming number of public comments received, a second public hearing has been scheduled for January 23, 2023. Stay tuned for any upcoming changes and updates.

Don't miss out on the opportunity to see how Credo AI's products can benefit your organization. Request a demo now by emailing us at info@credoai.com.
