Local Law No. 144

Local Law No. 144: NYC Employers & Vendors Prepare for AI Bias Audit with Credo AI’s Responsible AI Governance Platform

Catharina Doria
Marketing Manager
February 8, 2023
Contributor(s):
Amin Rasekh

The clock is ticking! With the enforcement deadline of New York City Local Law No. 144 (LL-144) fast approaching on April 15, 2023, companies are scrambling to comply with the new AI regulation. While some organizations are still unsure how to start their journey, others, like AdeptID, have already taken the lead in demonstrating their commitment to Responsible AI practices. In this blog post, we briefly describe Local Law No. 144, share how Credo AI is supporting HR employers and vendors, and showcase how we have supported AdeptID in adhering to the legal requirements established by LL-144.

AdeptID: Local Law No. 144 Compliance with Credo AI

In December 2021, the New York City Council passed Local Law No. 144 (LL-144), mandating that AI and algorithm-based technologies used for recruiting, hiring, or promotion be audited for bias before being used. With an enforcement date of April 15, 2023, non-compliance carries severe penalties, including fines of up to $1,500 per violation, as well as probable damage to a company’s reputation.

LL-144 reflects a broader trend: increasing regulation of automated employment decision tools (AEDTs) at the local, state, federal, and international levels. Beyond New York City, states such as Colorado, Maryland, and Vermont have passed legislation requiring some form of transparency related to AI and AEDTs. These include inventories and reviews that detail whether an automated decision system has been tested for bias by an independent third party, has a known bias, or is untested for bias. For more insight into Local Law No. 144, consult our latest blog post on 7 Things You Need to Know About the Updated NYC Algorithmic Hiring Law.

While LL-144 outlines requirements for the fair and transparent use of AI in New York City, its application to different scenarios can be unclear or underspecified. As a result, determining whether it applies to a given use case can be complex.

Organizations of all sizes are now seeking guidance and tools to ensure compliance with the law by April 15, 2023. 

AdeptID, a developer of machine-learning-powered talent-matching software that surfaces hidden talent in the workforce, was one of them. To help ensure fairness in its AI models, the company searched for a partner to assist with auditing and responsible design. After a comprehensive analysis of the industry and of the importance of adhering to Local Law No. 144, AdeptID decided to partner with Credo AI to ensure full compliance.



“We started AdeptID because we see huge potential for AI to identify hidden talent, and help millions find better jobs, faster. HR and Talent applications need more AI, not less.
But the AI has to be built and used responsibly - not just once a year but continuously. In Credo AI, we’ve found kindred spirits who believe AI can be used for good, if it’s used responsibly.”
- Fernando Rodriguez-Villa, CEO of AdeptID.

Operationalizing Local Law No. 144 with Credo AI’s Responsible AI Governance Platform

Since its founding in 2020, Credo AI has made it its mission to provide employers and vendors with the software tools and expertise needed to implement Responsible AI at scale. That mission now includes helping organizations confidently comply with LL-144 and other regulations governing the responsible development and use of AI.

Through our Responsible AI Governance Platform (the “Platform”), Credo AI provides organizations with a comprehensive solution for AI Governance, enabling them to fulfill their obligations not only under U.S. local and state legislation like NYC LL-144 or Colorado’s S.B. 169, but also under the European Union AI Act, the NIST AI Risk Management Framework, and other emerging standards and regulations.

Our Platform supports organizations in defining context-driven governance requirements for AI systems, conducting technical assessments of data and models, generating governance artifacts, and providing human-in-the-loop reviews and AI Governance process tracking.

Additionally, our Platform assesses models, datasets, and AI use cases against modular requirements drawn from laws, regulations, standards, and internal organizational policies. These requirements are encoded into platform components called Credo AI Policy Packs.

One of our Policy Packs—focused on LL-144—translates broad and abstract principles described in the law into actionable requirements and system assessments, helping organizations understand their obligations under regulations more effectively, test & evaluate AI systems, and implement the necessary processes to ensure continuous compliance. 
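To illustrate how a Policy Pack structures these obligations, here is a hypothetical sketch of LL-144 requirements encoded as configuration. The schema and field names below are invented for illustration and are not Credo AI’s actual Policy Pack format; the underlying obligations (an independent annual bias audit, impact ratios, a published results summary, and advance candidate notice) come from the law itself.

```yaml
# Hypothetical encoding of LL-144 obligations; not Credo AI's actual schema.
policy_pack: nyc-ll-144
requirements:
  - id: independent-bias-audit
    description: AEDT audited for bias by an independent auditor within the past year
    evidence: audit_report
  - id: impact-ratios
    description: Selection or scoring impact ratios computed by sex, by race/ethnicity, and by their intersections
    evidence: fairness_assessment
  - id: public-summary
    description: Summary of the most recent bias audit results made publicly available
    evidence: disclosure_report
  - id: candidate-notice
    description: Candidates notified at least 10 business days before AEDT use
    evidence: process_attestation
```

Encoding requirements this way is what lets the Platform check each system assessment against every obligation and flag gaps automatically.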

Given AdeptID’s scope of work, our solution was an out-of-the-box LL-144 Policy Pack, adapted to operationalize AdeptID’s specific needs.

Helping AdeptID facilitate its customers’ compliance with Local Law No. 144

To support its customers’ compliance with Local Law No. 144, AdeptID used Credo AI’s Platform to perform a bias assessment of a tool that helps identify high-potential candidates for apprenticeship-based training and employment. Additionally, Credo AI performed a third-party review of the assessment report that AdeptID generated using our Responsible AI Governance Platform.

This was our 5-step approach for operationalizing LL-144 using our Responsible AI Governance Platform:

1. Context-based Approach:

Every customer has distinct needs, and adhering to regulations like LL-144 requires solutions tailored to each organization. This can pose a challenge, but with the assistance of Credo AI’s expert Policy Team, compliance becomes a manageable task.

After a thorough analysis, Credo AI’s Policy Team concluded that LL-144 applies to AdeptID’s use case and that an annual bias audit is required.

2. Principles to Practice:

Responsible AI regulations, LL-144 included, may be unclear or lack specificity. Credo AI supports organizations by translating regulations and best practices into clear, actionable steps they can follow.

AdeptID, with its forward-thinking approach to bias in ML models, sought to go above and beyond the requirements set forth by the law and implement its own approach to measuring bias. In partnership with our team, AdeptID created a custom Policy Pack that met all legal requirements, incorporated its own internal bias measures, and presented information in a way that aligned with its customers’ perspectives.

3. Quantitative Assessment:

LL-144 requires a quantitative bias audit covering fairness metrics such as the impact ratio, including intersectional analysis. Organizations can run the full assessment using Credo AI Lens™, our open-source framework for Responsible AI assessments.

In this step, AdeptID’s data scientists used Credo AI Lens to assess their AI models for performance and fairness against the specified technical requirements.
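To make the required arithmetic concrete, here is a minimal, dependency-free sketch of the impact ratio LL-144 describes: each category’s selection rate divided by the selection rate of the most-selected category, computed either per single category or over intersectional groups. This is illustrative Python with invented applicant data, not Credo AI Lens’s actual API.

```python
from collections import defaultdict

def impact_ratios(records, group_keys, selected_key="selected"):
    """LL-144-style impact ratios: each group's selection rate divided
    by the selection rate of the group selected most often."""
    totals = defaultdict(int)  # applicants per group
    hits = defaultdict(int)    # selections per group
    for rec in records:
        group = tuple(rec[k] for k in group_keys)
        totals[group] += 1
        hits[group] += rec[selected_key]
    rates = {g: hits[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented toy applicant data (illustrative only, not AdeptID's).
applicants = [
    {"sex": "F", "race": "A", "selected": 1},
    {"sex": "F", "race": "A", "selected": 0},
    {"sex": "F", "race": "B", "selected": 1},
    {"sex": "F", "race": "B", "selected": 1},
    {"sex": "M", "race": "A", "selected": 1},
    {"sex": "M", "race": "A", "selected": 1},
    {"sex": "M", "race": "B", "selected": 1},
    {"sex": "M", "race": "B", "selected": 1},
]

# Single-category analysis by sex, then intersectional sex x race.
print(impact_ratios(applicants, ["sex"]))
print(impact_ratios(applicants, ["sex", "race"]))
```

A tool like Lens automates this kind of assessment over a system’s real data and many more metrics; the sketch only shows the core ratio the audit reports.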

4. Transparency Reporting:

LL-144 requires organizations to publicly report their use of artificial intelligence and their compliance with the regulation, which can be a complex and time-consuming task. With our Responsible AI Governance Platform’s reporting features, organizations can generate governance artifacts such as transparency reports, model cards, disclosure reports, and audit artifacts easily and in a standardized format. Additionally, our Platform allows reports to be customized for different stakeholders, from customers to auditors, regulators to internal executives. Through governance reports, organizations can demonstrate their commitment to Responsible AI, showcase their competitive advantage, and earn trust by going above and beyond the requirements of LL-144.

The Platform’s reporting enabled AdeptID to generate a standardized LL-144 report that included the custom bias results they wanted to present alongside the legal requirements.

5. Continuous Governance:

The Responsible AI regulatory landscape, including LL-144, is constantly evolving, with new developments and proposed rulemaking released regularly. This can make it challenging for organizations to stay up-to-date and ensure compliance with the latest regulations.

To address these challenges, our Credo AI Policy Team ensures that up-to-date policy intelligence is always available via Policy Packs, which we maintain and update regularly. The Responsible AI Governance Platform ensures that organizations regularly evaluate, assess, monitor, and audit their AI systems based on the latest developments in the AI regulatory landscape.

Given the importance of continuous governance, our Policy Team updated the AdeptID Policy Pack with the clarifications and adjustments to LL-144 released in December 2022, ensuring full compliance with the latest legal requirements.

6. Bonus! Responsible AI Review:

Beyond the Platform’s automated assessment of systems for LL-144 compliance, our human review of the assessment report identifies gaps and opportunities, increasing the report’s reliability and providing additional assurance to stakeholders.

Credo AI’s expertise and experience in Responsible AI have led us to offer a third-party review of assessment reports, adding an extra layer of reliability for AdeptID. The review by our team provided AdeptID with insights and recommendations for bias mitigation and improved compliance. 

Conclusion

As organizations work toward compliance by April 15, 2023, the importance of meeting the necessary standards and operating responsibly cannot be overstated. Organizations like AdeptID are setting an inspiring example with their commitment to Responsible AI practices and compliance, paving the path for others to follow. If you are seeking guidance on how to implement NYC Local Law No. 144, our team is ready to assist. Reach out to us at demo@credo.ai and request a demo today.
