Most large enterprises struggle to keep track of the third-party AI/ML tools used across their teams, from sales to marketing to operations. As AI-driven applications come under greater scrutiny from regulators and the public, organizations need to understand whether their AI vendors are exposing them to legal or regulatory risk.
In light of New York City's recently passed algorithmic hiring law (NYC Local Law No. 144), which goes into effect in January 2023 and explicitly requires that every automated employment decision tool used by employers in New York City undergo an annual bias audit, a leading Fortune 500 financial services company needed a way to evaluate the third-party AI-driven software used within its HR department for risk and compliance.
With no visibility into the legal and regulatory risk exposure from these tools, the organization needed to quickly establish a standardized third-party AI evaluation process for assessing the regulatory compliance of the AI-driven HR tools it relied on.