Evaluating Third Party AI/ML for Legal Risk & Regulatory Compliance in the HR Space
A Fortune 500 financial services company uses Credo AI to evaluate third-party AI/ML tools for risk and compliance. With the help of Credo AI, they identified an out-of-compliance HR vendor tool and immediately took action to ensure its use was aligned with regulatory and legal requirements.
Most large enterprises struggle to keep track of their third-party AI/ML tools used across different teams, from sales to marketing to operations. As AI-driven applications come under greater scrutiny from regulators and the general public, many organizations need to understand whether their AI vendors are exposing them to legal or regulatory risks.
In light of New York City's recently passed algorithmic hiring law (NYC Local Law No. 144), which takes effect in January 2023 and requires that every automated employment decision tool used to make employment decisions in New York City undergo an annual bias audit, a leading Fortune 500 financial services company needed a solution to evaluate the third-party AI-driven software used within its HR department for risk and compliance.
With zero visibility into the legal and regulatory risk exposure from these tools, the organization needed to quickly establish a standardized third-party AI evaluation process to assess the regulatory compliance of its AI-driven HR tools.
Beginning with New York City's algorithmic hiring law, the financial services provider used Credo AI's out-of-the-box Policy Pack, which includes a standardized report template for the legally required bias audits. Their team sent evidence requests to each of their HR vendors in just a few clicks, and each vendor could upload its documentation and technical assessment results for approval. With this systematic approach, the company's AI Risk and Compliance team could review the evidence for each vendor solution in a centralized evidence store.
Using Credo AI, the AI Risk and Compliance team conducted evidence reviews and approvals (or rejections) in a streamlined way and automatically generated formatted reports at the end of each compliance review. Notably, the reports are designed for non-technical stakeholders, enabling HR and compliance professionals to understand each tool's technical bias assessment results in the context of ethical and legal considerations.
Through this process, the AI Risk and Compliance team identified an "at-risk" AI vendor solution in HR, and Credo AI made it easy to flag and address the at-risk use case. Thanks to the Responsible AI Platform's Policy Pack, the team quickly pinpointed which pieces of evidence received from the vendor were insufficient to prove compliance with the law, and it conducted its own tests on the tool to produce the additional documentation the law requires.
The team is now using Credo AI to assess more third-party AI tools across different teams and departments, and is confident Credo AI provides all the tools they need to uncover and address compliance issues, as well as to produce reports that demonstrate their good-faith compliance efforts to regulators and policymakers.