
NYC Bias Audit Law: Clock ticking for Employers and HR Talent Technology Vendors

Merve Hickok
Founder @Alethicist.org
August 15, 2022


On January 1, 2023, New York City (NYC) Local Law 144, also known as the NYC bias audit law for automated employment decision tools, will go into effect. With only a few months left for organizations to become compliant, it is a good time to discuss the impact of this legislation and highlight areas for improvement as it matures.

The NYC legislation is one of the first examples in the world of a jurisdiction requiring algorithmic audits of tools used by employers for employment decisions. Sitting at the unique intersection of being an HR professional with 20 years of experience, an AI ethicist, and an AI policy researcher, I share the following as a reference for those interested in the scope and impact of this legislation, along with some recommendations for its future evolution.

Scope

NYC Local Law 144 requires employers that use automated employment decision tools (AEDTs) for employment or promotion decisions within NYC to ensure that every such tool that substantially assists or replaces a discretionary decision undergoes an annual bias audit by an independent auditor. The scope of the audit shall include, but not be limited to, testing the tool for disparate impact on persons with regard to sex, race, and ethnicity.

If an employer uses an AEDT to narrow what we call the hiring funnel or pipeline, the tool requires a bias audit. And an employer that has built its own tool is not excluded from the scope of the law.

Along with the audit, the law also requires the employers to have certain documents and practices in place. These are:

  • Publicly posting a summary of the results of the most recent bias audit on their website,
  • Giving candidates notice that an automated employment decision tool will be used, along with the characteristics of the tool, at least 10 days ahead of its use,
  • Allowing a candidate to request an alternative selection process or accommodation.

Areas for improvement

While there are multiple benefits, there are also significant gaps and areas for improvement with this legislation, voiced both by me and several researchers and advocates.

Definitions: The text of the legislation does not define the terms most important to its intended results and benefits — such as bias, audit criteria, or minimum requirements for the impartiality or independence of the auditor. Guidance on such terms would also have clarified whether an internal auditor could conduct the audit. This vagueness makes it extremely important that the scope and rules of an audit be developed independently of the auditor conducting it, to ensure integrity and reduce conflicts of interest.

Other Protected Categories: While the law requires the audit to focus on bias regarding race, ethnicity, and gender, it lacks any mention of a bias audit for other categories also protected by federal legislation, such as disability (ADA), age (ADEA), and sexual orientation (Title VII). Even though these categories are also included in NYC’s own anti-discrimination code, they are excluded from the audit requirements. In the case of people with disabilities especially, the EEOC recently published a technical assistance document detailing how AEDTs may violate existing requirements under Title I of the ADA and provided tips to employers on how to comply. A bias audit should be inclusive of these categories as well.

Extent of bias audit: As per this legislation, a bias audit shall include, but not be limited to, testing the tool’s disparate impact. In the absence of a definition of bias, audits can turn into a simple check of disparate impact using the 4/5ths rule. This is a race to the bottom and can turn the requirements and spirit of this legislation into a rubber-stamp activity. Such a narrow conceptualization of bias misses the opportunity to audit several other important factors in the design and implementation of employment tools. These may include data provenance and quality (in terms of representation, recency, and completeness), design decisions and motivations embedded in the tools (which might obscure discriminatory motivations behind algorithmic opacity), biases that might leak into the system due to the architecture of the tool (such as automation bias), and the use of other statistical tests.
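To illustrate just how thin a disparate-impact-only audit can be, the 4/5ths rule reduces to a few lines of arithmetic: compute each group's selection rate, then flag any group whose rate falls below 80% of the highest group's rate. The group names and counts below are hypothetical, and a meaningful audit would involve far more than this single check:

```python
# Minimal sketch of a four-fifths (4/5ths) rule check for disparate impact.
# Group names and counts are illustrative, not taken from any real audit.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(groups):
    """Return each group's selection rate divided by the highest group's rate.

    groups: dict mapping group name -> (selected, applicants)
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical outcomes of an automated screening step
groups = {
    "group_a": (48, 120),  # 40% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

for group, ratio in impact_ratios(groups).items():
    flag = "below 4/5ths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b's ratio is 0.30 / 0.40 = 0.75, below the 0.8 threshold. Passing such a check says nothing about data quality, proxy variables, or the tool's design, which is exactly the gap described above.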

Notice to candidates: Employers are required to provide candidates a minimum 10-day notice about the upcoming use of an AEDT, including the specific job qualifications and characteristics the tool will use in determining its outcome. This allows a candidate to request an alternative selection process or accommodation. A template or guidance document establishing the minimum content of such a notice is crucial and necessary. Otherwise, the content can be too vague and high-level, or too long and full of legal and technical terms, to benefit the candidate.

Audit reports: Interestingly, the legislation does not prescribe a pass/fail result for an audit. Nor does it provide any guidance on the minimum details that should be included in the audit summary report. A template or guidance document establishing the minimum content of such a summary is crucial and necessary. Otherwise, these documents can be only a few sentences long and provide no quantitative or qualitative information of benefit to candidates or the public interest.

The development of this legislation should have included a robust public consultation and feedback process, which would have allowed further clarification and improvement of its scope and concepts. The legislation could also have required agency rule-making; the NYC Council has not required such rule-making and has not published any further guidance since the law’s publication in December 2021.

Impact on candidates

As mentioned, the law requires employers to notify candidates about the use of these tools and gives candidates the choice to request an alternative method. The notice is beneficial for candidates to better understand the process. However, a notice does not remove the power imbalance between a candidate and an employer. Although the NYC law requires notice, it does not clarify what happens when a candidate requests an alternative selection process or accommodation and the employer chooses not to respond. Under the ADA, an individual with a disability can request a modification to a process, task, or environment in order to have an equal opportunity to get a job. Accommodations are considered “reasonable” if they do not create an undue hardship for the employer. The NYC legislation extends the opportunity to request an alternative selection process or accommodation to all candidates, not just those with disabilities, but then falls short of creating any transparency or redress mechanisms for candidates.

The public disclosure of audit results can give candidates a high-level understanding of the ethical and responsible practices of their potential future employers and how those employers value candidate experience. Such disclosures, and of course the audit results themselves, can also help candidates flag concerns if they believe they have been assessed in a biased way, and contest decisions as necessary.

However, as detailed above, the shortcomings in the scope of the audit (limited to bias, and only for race, ethnicity, and gender) and the vagueness of the concepts used can limit the intended results and spirit of the legislation — namely, protecting candidates in NYC and creating accountability for employment decisions and practices. Even for the narrow categories included in the audit, gaming of the system and a race to the bottom toward simple, rubber-stamp audits might result in false assurances to candidates.

Impact on employers

Hiring and promotion decisions have direct consequences for the diversity, success, profitability, and resiliency of an organization. Employers deploy AEDTs to reach more diverse talent pools while making the process faster and more efficient. However, not all tools or processes are created equal. Some might produce more biased decisions than the human-only decisions the employer is trying to improve upon or replace. Certain biased or pseudo-scientific tools might in fact result in more homogeneous hiring and promotion decisions and exclude people for reasons not related to performance or success in a role. Employers therefore benefit from conducting due diligence on the tools they use — for the candidate experience, for better hiring, for an inclusive culture, and for the success and sustainability of their organizations.

In terms of the impact of Local Law 144 on employers, they first need to figure out which of the tools they use might be subject to the law. Once equipped with that inventory, employers need to start conversations and coordinate with their vendors and internal teams on how to prepare for, and then conduct, the bias audit. The law states that “it shall be unlawful for an employer or an employment agency to use AEDT to screen a candidate or employee for an employment decision unless such tool has been the subject of a bias audit conducted no more than one year prior to the use of such tool.” In other words, if an employer is already using an AEDT, the bias audit should be conducted by January 1, 2023. If an employer is considering developing or procuring an AEDT, the bias audit requirement creates an opportunity to be more diligent about the implications of such tools for candidates and the company. Either way, it is crucial to start now on assessing the current state of the AEDT and the gap toward compliance with the law.

In terms of responsibility within the organization, it is best practice to involve a multidisciplinary team in both procurement due diligence and the audit process. HR Strategy, Talent Management, and HR Technology partners are a must, along with Compliance, Legal, Data Science, and Information Security teams. Focusing on responsible AI from the procurement stage and having a robust governance mechanism already takes you most of the way toward meeting this law, future-proofs you for other legislation, and differentiates you as a responsible employer.

Employers also need to think about alternative selection processes and the impact the 10-day notice period might have on their operations. There will always be a minimum 10-day gap between notifying a candidate and using the AEDT. Since notification is required for every AEDT subject to the legislation, employers should consider detailed notifications covering the whole process, rather than step-by-step notices, to keep the time impact to a minimum.

The legislation also introduces several penalties for breach of the requirements of Local Law 144:

  • A civil penalty of not more than $500 for a first violation and for each additional violation occurring on the same day as the first, and not less than $500 nor more than $1,500 for each subsequent violation
  • Each day on which an automated employment decision tool is used in violation of this section gives rise to a separate violation
  • Failure to provide any notice to a candidate or an employee constitutes a separate violation

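Because each day of unaudited use and each missed notice counts as a separate violation, exposure compounds quickly. The sketch below is a rough worst-case estimate only: the statute sets a $500–$1,500 range for subsequent violations, the actual amounts are determined in enforcement, and the function and scenario here are illustrative assumptions rather than legal guidance.

```python
# Back-of-the-envelope worst-case penalty estimate under the schedule above.
# Assumes the first violation at $500 and every subsequent violation at the
# $1,500 statutory maximum; real assessed amounts may be lower.

FIRST_VIOLATION_PENALTY = 500   # first violation (and same-day extras)
SUBSEQUENT_MAX = 1_500          # statutory upper bound for later violations

def worst_case_penalty(days_of_use, missed_notices):
    """Upper-bound exposure for a given number of days of non-compliant
    AEDT use plus separate missed-notice violations."""
    violations = days_of_use + missed_notices
    if violations == 0:
        return 0
    return FIRST_VIOLATION_PENALTY + (violations - 1) * SUBSEQUENT_MAX

# e.g. 30 days of unaudited use plus 10 candidates who were never notified:
# 500 + 39 * 1,500 = $59,000
print(worst_case_penalty(30, 10))
```

Even this crude arithmetic shows why waiting out the compliance deadline is an expensive strategy: a single tool used for a month without an audit can accumulate tens of thousands of dollars in potential penalties.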
Impact on vendors

The law puts the responsibility on employers, but the audit cannot be done without collaboration with vendors; there is necessity and benefit for both sides in working together. Vendors will need to coordinate with employers and auditors with regard to access to data and models, the characteristics of the tool, and of course any necessary mitigation steps. Vendors have a huge interest in making sure their tools are audited to a high standard so they can differentiate their products, protect their clients, and possibly increase their market share. In certain cases, a vendor might have tailored or trained a model specifically for a client; the vendor would then need to coordinate separate audits of each tailored model with the affected clients.

Separately, vendors have an additional interest in ensuring their products are accurately described in candidate notifications and publicly disclosed audit results. In 2021, the Federal Trade Commission (FTC) reminded companies of its jurisdiction and enforcement powers as they relate to AI systems. In summary, the “FTC Act prohibits unfair or deceptive practices which would include the sale or use of racially biased algorithms; Fair Credit Reporting Act ensures an algorithm for employment, housing, credit, insurance, or other benefits does not deny people opportunities unfairly and Equal Credit Opportunity Act makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.” The FTC concluded its public post with “Hold yourself accountable — or be ready for the FTC to do it for you.”

Looking to the future

Employers, vendors, researchers, lawmakers, civil society, and advocates all have a responsibility to be accountable for their work and for how that work impacts their organizations and society at large. There is a serious need for all these parties to work collectively and collaboratively to keep the bar high for the ethical, equitable, and responsible development and use of AEDTs and algorithmic systems.

