AI Governance

Future-Proofing Automated Employment Decision Tool Use to Comply with AI Regulations


March 22, 2022
Author(s)
Eddan Katz

Long gone are the days when job seekers pounded the pavement to gain employment. For the benefit of digital natives - the idiom refers to a determination to find work: showing up at an employer’s office to drop off one’s resumé was thought to improve the odds of landing an interview. For the sake of everyone else - the expression traces back to before cities had sidewalks.

Today’s employees are quite accustomed to applying for jobs online and to many of the automated application processes that go along with it. Searching the web for employment opportunities has made it much easier for job seekers to proverbially pound the pavement.

However, it has also meant that the volume of applications for specific jobs at individual organizations often becomes practically unmanageable, leaving human resources departments unable to keep up with reviewing each resumé.

The Mainstream Automation of the Hiring Process

Over the past decade, many companies have adopted some form of automation for the hiring process by using what are now called Automated Employment Decision Tools (AEDT). These include software programs that aggregate candidates, conduct chatbot interviews, or administer game-based assessments, to name a few. These technological innovations have permeated the recruitment process, as well as many other aspects of employee management.

Though some progress has been made in confronting discrimination in the hiring process over the past few decades, there is still much work to do in addressing the systemic bias that obstructs more equitable employment opportunities. The hope for companies integrating AEDT into their hiring is that finding the right candidate will become faster and fairer, setting aside preconceptions that are actually irrelevant to a candidate’s fitness for the role.

The use of Artificial Intelligence (AI) algorithms in these AEDT has amplified concerns about bias, because relying on existing data to train an algorithm increases the likelihood of cementing past discrimination into automated decisions. Even when protected attributes - such as race and gender - are explicitly removed, inferences are still drawn and biased correlations persist. Automating the process makes this even more dangerous, as the prejudice becomes increasingly invisible.

Global Developments in Regulating AI used for Employment Purposes

The European Union’s Proposal for a Regulation laying down harmonised rules on artificial intelligence is an ambitious legal framework structured to distinguish among practical uses of AI based on their respective risks and to apply the appropriate level of scrutiny to each. The European Commission’s draft explicitly identified hiring algorithms as a high-risk use case:

AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons*.

* Proposal for the EU Artificial Intelligence Act, Recital 36, p. 27: https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF

In the U.S., the Equal Employment Opportunity Commission (EEOC) last year launched an initiative to ensure algorithmic fairness in the use of AI. EEOC Chair Charlotte A. Burrows explained the urgency of clarifying how the law applies:

“While the technology may be evolving, anti-discrimination laws still apply. The EEOC will address workplace bias that violates federal civil rights laws regardless of the form it takes, and the agency is committed to helping employers understand how to benefit from these new technologies while also complying with employment laws.”

NYC Automated Employment Decision Tools Regulation

Late last year, the New York City Council passed a bill that prohibits the use of AEDT to screen candidates for hiring or employees for promotion unless employers and employment agencies meet certain requirements. The bill was approved on Nov. 10, 2021, and became law - NYC Admin Code § 20-870 et seq. - on Dec. 10, 2021. The law takes effect on Jan. 1, 2023, and requires:

  • Annual bias audit: An employer cannot use an AEDT unless: (1) the tool undergoes a bias audit for disparate impact before it is used and then annually for every year it remains in use; and (2) a summary of the results of the bias audit is made public on the employer’s website before the tool is used. The bias audit must be conducted within one year before the employer’s use of the tool (a sketch of one common disparate-impact calculation follows this list).
  • Notice to Employee/Candidate: An employer using an AEDT to screen employment candidates must notify candidates who reside in New York City of (1) the use of the tool in connection with the assessment of the candidate; (2) the job qualifications and characteristics the tool considers in making that assessment; and (3) the type of data collected by the tool, the source of the data, and the employer’s policy on how long the data is kept.
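The law does not spell out the calculation behind a bias audit, but the EEOC’s long-standing “four-fifths rule” (29 C.F.R. § 1607.4(D)) is a widely used benchmark for disparate impact: a group whose selection rate falls below 80% of the highest group’s rate is generally treated as showing evidence of adverse impact. Below is a minimal sketch of that calculation in Python; the column names, input format, and sample data are illustrative assumptions, not a prescribed audit procedure.

```python
# A minimal sketch of a disparate-impact check - not the audit procedure
# the NYC law mandates, since the statute does not define one. It applies
# the EEOC "four-fifths rule": a selection rate below 80% of the highest
# group's rate is generally treated as evidence of adverse impact.
import pandas as pd

def disparate_impact_ratios(df: pd.DataFrame, group_col: str,
                            selected_col: str = "selected") -> pd.DataFrame:
    """Selection rate per group and its ratio to the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratios,
        "below_four_fifths": ratios < 0.8,  # flags potential adverse impact
    })

# Hypothetical screening results: one row per candidate, recording a
# protected attribute and whether the AEDT advanced the candidate.
candidates = pd.DataFrame({
    "sex": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1, 0, 0, 1, 1, 1, 1, 0],
})
print(disparate_impact_ratios(candidates, group_col="sex"))
```

An actual audit would repeat this check for each category the law names - sex, race, and ethnicity - and ideally for their intersections as well.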

With this law, the NYC Council has taken the lead in regulating AEDT ahead of any other jurisdiction in the US or Europe. It has given shape to norms that other government entities at the city, state, or federal level can use to legally bind employers to similar controls. It is an important first step towards more meaningful transparency in hiring practices, setting standards that require employers to disclose audit results.

Future-Proofing Compliance for Responsible AI

The law has left several key questions open, though, and it remains to be seen how similar regulations in other jurisdictions will address these ambiguities and legislative limitations. The NYC Admin Code § 20-870 regulations are narrowly constructed regarding the use of AI tools in employment. By focusing solely on hiring and promotion, the law leaves many substantial employment decisions that may be automated unregulated - including those related to compensation, scheduling, and working conditions.

Comparing the first draft of the bill made public by the NYC Council with the final version, the scope of protected attributes was reduced to disparate impact based on race, ethnicity, and sex. By leaving out explicit mention of discrimination based on disability, age, or sexual orientation, the law leaves many people who suffer the consequences of bias unprotected and further disadvantaged. Similarly, the final version’s restriction of its protections to NYC residents is untenable, given that much of the NYC workforce commutes from outside the city.

Another significant change in the final version of the bill was the shift of the burden of responsibility from the vendors of AEDT to the companies using them. Employers are now responsible for ensuring that a bias audit is conducted and that the results are made public. In the draft version, vendors were responsible for conducting bias audits as a prerequisite for being allowed to sell their tools.

Credo AI: Ensuring Responsible Hiring Using AI

At Credo AI, we have been working on these challenges for years to ensure that AI is always in service to humanity. Our mission is to ensure that an increasingly AI-embedded society will have more equitable access to healthcare, education, and employment - not less. 

Our Responsible AI (RAI) platform provides context-driven governance and risk assessment to ensure compliant, fair, and auditable development and use of AI. We are creating Policy Packs at the cutting edge of industry best practices that bridge our tools with the rules now taking shape.

Credo AI Policy Packs: NYC AEDT Regulations

The most significant shortcoming of the NYC AEDT regulation is the lack of guidance on what is meant by a bias audit. The law defines neither bias nor audit beyond the requirement for an assessment of disparate impact with respect to sex, race, and ethnicity. The bill also requires employers to post a summary of the bias audit on their website, but gives no outline of what that summary must contain.
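Because the statute gives no outline, the sketch below shows one hypothetical shape a machine-readable summary posted to an employer’s website might take. The field names, tool name, and figures are assumptions for illustration, not a mandated format; the metrics are the selection rates and impact ratios produced by a disparate-impact check like the one sketched earlier.

```python
# A hypothetical outline of the public bias audit summary the NYC law
# requires but does not specify. All field names and figures below are
# illustrative assumptions, not a mandated or recommended format.
import json
from datetime import date

def audit_summary(tool_name: str, results: dict) -> str:
    """Assemble a publishable JSON summary of an annual bias audit."""
    return json.dumps({
        "tool": tool_name,
        "audit_date": date.today().isoformat(),  # must fall within one year of use
        "attributes_assessed": sorted(results),  # sex, race, and ethnicity per the law
        "results": results,
    }, indent=2)

# Illustrative numbers only, not real audit output.
print(audit_summary("resume-screener-v2", {
    "sex": {
        "F": {"selection_rate": 0.50, "impact_ratio": 0.67},
        "M": {"selection_rate": 0.75, "impact_ratio": 1.00},
    },
}))
```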

The Credo AI: NYC AEDT Policy Pack provides vendors, employers, and third parties with a set of bias audit rules that can objectively assess models, datasets, outcomes, and processes. We look out for a company’s best interests further into the future by working directly with policymakers to align on appropriate ecosystem incentives and to anticipate impending regulations.

We are committed to leading Responsible AI Governance with industry-standard best practices, not only for compliance and minimizing exposure to liability, but because it’s good for business. Integrating Responsible AI stimulates productivity through better data accuracy, and it enables collaboration across the organization by making algorithmic transparency accessible to teams and useful for their respective purposes.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.