NYC AI Bias Audit: 7 Things You Need to Know About the Updated NYC Algorithmic Hiring Law

Credo AI
January 19, 2023

The buzz surrounding New York City's algorithmic hiring law continues to grow, particularly since the Department of Consumer and Worker Protection (DCWP) released its latest proposed rulemaking in December 2022 to clear up earlier uncertainty. With the enforcement deadline fast approaching, it is critical for organizations to understand the requirements of NYC Local Law No. 144, especially its revisions, before its enforcement date of April 15, 2023.

The clock is ticking, and we at Credo AI are committed to helping organizations understand what the recent updates and key aspects of the new regulation mean for them. In today’s blog post, we will outline seven things employers and employment agencies need to know about the requirements for AI bias audits. Without further ado, let's get started.

1. What is the new algorithmic hiring law in New York City?

In December 2021, the New York City Council passed Local Law No. 144—which amends the administrative code of the city of New York to address automated employment decision tools. Overall, the law regulates the use of these tools in hiring and promotion decisions for candidates and employees within the city and requires that AI and algorithm-based technologies for recruiting, hiring, or promotion be audited for bias before being used. Put simply, if you are an employer or employment agency in New York City that builds or uses these tools, this law is relevant to you.

2. What constitutes an automated employment decision tool? 

The law applies to an automated employment decision tool (AEDT) that is used “to substantially assist or replace discretionary decision-making.” The latest DCWP rulemaking, issued in December, clarified that this phrase means:

(i) to rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered; 

(ii) to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or 

(iii) to use a simplified output to overrule conclusions derived from other factors, including human decision-making.

Examples of AEDTs include tools that screen resumes and schedule interviews for job postings, as well as those that score applicants for a "culture fit."

3. What are the compliance requirements for employers or agencies under this law?

There are numerous legal requirements that employers must adhere to when utilizing automated employment decision tools. A few examples are: 

- Have an independent auditor conduct an annual bias audit on the AEDT no more than one year before using the tool.

- Make the summary of the results of the most recent bias audit publicly available on the company website.

- Give notice to candidates that an AEDT will be used, and provide details of the characteristics the AEDT will assess, a minimum of 10 business days before using it.

- Allow a candidate to request an alternative selection process or accommodation.

4. What are the penalties for non-compliance with New York City's algorithmic hiring law?

Non-compliant employers and agencies may face civil penalties: up to $500 for a first violation, and between $500 and $1,500 for each subsequent violation. Each day an automated employment decision tool is used in violation of the law gives rise to a separate violation.

5. What are the updated requirements for the bias audit following the December 2022 Rulemaking?

The December 2022 rulemaking update to New York City's algorithmic hiring law brings necessary clarifications to using automated employment decision tools in hiring and promotion decisions. If you would like to view the complete list of changes, please refer to the official document shared by the New York City Department of Consumer and Worker Protection.

In summary, the following are the updates to New York City's algorithmic hiring law:

A. Clarification on the qualifications for independent auditors.

In the context of the law, an independent auditor cannot:
i) have any kind of financial interest (direct or indirect) in the employer using the AEDT or the vendor of the AEDT;
ii) have been involved in using, developing, or distributing the AEDT; and
iii) have an employment relationship with the employer using the AEDT or the vendor of the AEDT.

B. Clarification of mathematical calculations of disparate impact. 

The rulemaking clarifies the mathematical calculation used to measure disparate impact, which previously lacked a clear definition. The impact ratio can be calculated in one of two ways, depending on whether the system selects or scores candidates (a short code sketch follows the list below):

  1. (Selection rate for a category) / (Selection rate of the most selected category)—for example, (Selection rate for Black women) / (Selection rate for White men)
  2. (Scoring rate for a category) / (Scoring rate of the highest scoring category), where “Scoring rate” means the rate at which individuals in a category receive a score above the sample’s median score, as calculated by the AEDT.
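
To make the two calculations concrete, here is a minimal Python sketch using pandas. The table, the column names, and the example categories are hypothetical illustrations; the law prescribes the math, not any particular data format.

```python
# A minimal sketch of the two impact-ratio calculations above, on a
# hypothetical applicant table. Column names and categories are
# illustrative, not prescribed by Local Law No. 144.
import pandas as pd

applicants = pd.DataFrame({
    "category": ["White men", "White men", "Black women", "Black women"],
    "selected": [1, 1, 1, 0],          # 1 = advanced by the AEDT
    "score":    [0.9, 0.7, 0.8, 0.4],  # raw AEDT score
})

# 1. Selection-rate impact ratio: each category's selection rate
#    divided by the selection rate of the most selected category.
selection_rates = applicants.groupby("category")["selected"].mean()
selection_impact = selection_rates / selection_rates.max()

# 2. Scoring-rate impact ratio: the scoring rate is the share of a
#    category scoring above the sample median; divide each category's
#    rate by the highest scoring rate.
above_median = applicants["score"] > applicants["score"].median()
scoring_rates = above_median.groupby(applicants["category"]).mean()
scoring_impact = scoring_rates / scoring_rates.max()

print(selection_impact)
print(scoring_impact)
```
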
C. The explicit addition of intersectional analysis of disparate impact.

Previously, the law did not specify whether organizations needed to evaluate disparate impact across intersectional categories of race/ethnicity and sex, but now the additional rulemaking clarifies that intersectional analysis is required. For example:

Before, an organization could simply have checked whether an AEDT had a lower selection rate for Black candidates vs. White candidates, or for female candidates vs. male candidates. Now, organizations must also evaluate whether the AEDT has a lower selection rate for intersectional groups, such as Black female candidates vs. Black male candidates, White female candidates, and White male candidates.
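
The intersectional requirement mainly changes how candidates are grouped before the impact ratio is computed. Here is a hedged sketch under the same hypothetical data assumptions as the example above:

```python
# Illustrative intersectional analysis: selection rates are computed per
# combined race/ethnicity-and-sex category, not per single attribute.
# All data and column names here are hypothetical.
import pandas as pd

applicants = pd.DataFrame({
    "race":     ["Black", "Black", "White", "White", "Black", "White"],
    "sex":      ["F",     "M",     "F",     "M",     "F",     "M"],
    "selected": [0, 1, 1, 1, 1, 1],
})

# Grouping by both attributes at once yields the intersectional
# categories (Black women, Black men, White women, White men).
rates = applicants.groupby(["race", "sex"])["selected"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)
```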

D. Clarification on the use of historical data.

Overall, historical data should be used to conduct the bias audit, but if insufficient historical data is available, then “test data” may be used instead. Here, test data is any data that is not historical data that is used to conduct the audit. While the law does not specify whether this data can or cannot be synthetic or of any other type, it does specify that if test data is used, the bias audit must include a description of why historical data was not available and how the test data was generated.

E. Clarification on the use of the same bias audit. 

According to the update, multiple employers can use the same bias audit if they all use the same tool; bias audits do not have to be employer-specific or rely on historical data from only one employer. However, an employer may only rely on a bias audit that contains historical data of other employers if it provided its own historical data to the auditor during the audit or if it has never used the AEDT.

F. Clarification of AEDT Prohibition.

The new rulemaking makes it more explicit that an employer may only use an AEDT if an NYC Law No. 144-compliant bias audit has been conducted in the last year.

6. When does the new law go into effect?

Enforcement of NYC Local Law No. 144 was postponed from January 1 to April 15, 2023, giving employers extra time to complete the necessary bias audits and ensure full compliance.

7. How can Credo AI support you with the NYC Local Law 144?

Now that you have gained an understanding of the new bias audit requirements under Local Law No. 144 in New York City, it is time to take action. Our AI Governance solutions—Context Driven AI Governance Platform, Credo AI Lens™, and Policy Packs—can help ensure that your organization is prepared to meet the new requirements for your AI-driven employment decision tools. Don't let bias audits hold you back—let Credo AI support you by streamlining your AI Governance process!

In light of the overwhelming number of public comments received, a second public hearing has been scheduled for January 23, 2023. Stay tuned for any upcoming changes and updates.

Don't miss out on the opportunity to see how Credo AI's products can benefit your organization. Request a demo now by emailing us at demo@credo.ai.
