AI Compliance

Evaluating Third Party AI/ML for Legal Risk & Regulatory Compliance in the HR Space

A Fortune 500 financial services company uses Credo AI to evaluate third-party AI/ML tools for risk and compliance. With the platform, the company identified an out-of-compliance HR vendor tool and took immediate action to bring its use into alignment with regulatory and legal requirements.

Most large enterprises struggle to keep track of their third-party AI/ML tools used across different teams, from sales to marketing to operations. As AI-driven applications come under greater scrutiny from regulators and the general public, many organizations need to understand whether their AI vendors are exposing them to legal or regulatory risks.

In light of New York City's recent algorithmic hiring law (NYC Local Law No. 144), which goes into effect in January 2023 and explicitly requires that every automated employment decision tool used to make employment decisions in New York City undergo an annual bias audit, a leading Fortune 500 financial services company needed a solution to evaluate the third-party AI-driven software currently used within its HR department for risk and compliance.

With zero visibility into the legal and regulatory risk exposure from these tools, the organization needed to quickly establish a third-party AI evaluation process and a standardized approach to vendor risk assessment so it could evaluate the regulatory compliance of its various AI-driven HR tools.


Beginning with New York City’s algorithmic hiring law, the financial services provider used Credo AI's out-of-the-box Policy Pack, which includes a standardized report template for the legally required bias audits. The team sent evidence requests to each of its HR vendors in just a few clicks, and every vendor could upload its documentation and technical assessment results for approval. With this systematic approach, the company's AI risk and compliance team could review the evidence for each vendor solution in a centralized evidence store.


With Credo AI, the AI Risk and Compliance team conducted evidence reviews and approvals (or rejections) in a streamlined way and automatically generated formatted reports at the end of each compliance review. Notably, the reports are designed for non-technical stakeholders, enabling HR and compliance professionals to understand each tool's technical bias assessment results in the context of ethical and legal considerations.

Join the movement to make Responsible AI a reality.