AI Governance

Navigating High-Risk AI: Ensuring AI Governance in HR, Healthcare, FinServ, and Insurance

This blog highlights when your use case might constitute a High-Risk AI application, underscores the importance of AI governance in managing these cases, and explains how Credo AI can provide robust governance solutions for your High-Risk AI initiatives.

January 23, 2024
Author(s)
Ehrik Aldana
Contributor(s)
Lucía Gamboa

In the rapidly expanding world of artificial intelligence, certain industries and use cases can be categorized as “High-Risk.”

This classification isn’t just jargon; it explicitly acknowledges that the stakes of certain AI applications are significantly higher due to their profound impact on human lives. High-Risk AI use cases warrant stricter oversight and rigorous governance, particularly in healthcare, finance, human resources, and insurance.

What are High-Risk AI Applications?

When we discuss High-Risk AI applications, we're referring to AI systems whose outcomes can deeply affect individuals’ livelihoods, health, and access to essential services.

In fact, many emerging legal frameworks are paying close attention to these types of use cases. For example, the European Union’s draft AI Act and the U.S. Government’s AI guidance following President Biden’s Executive Order on AI detail more stringent requirements for “High Risk” and “Rights or Safety Impacting” use cases, respectively. Moreover, industries that are already heavily regulated due to their impact on public safety and welfare are becoming subject to additional regulatory requirements specific to AI.

Some of the industries where High-Risk AI use cases are prevalent include:

Healthcare

In healthcare, AI technologies promise to revolutionize patient diagnosis, treatment options, and care management systems. But with such advancements comes the responsibility to meticulously govern these AI systems to prevent dire outcomes that could result from erroneous decisions. 

Healthcare organizations must already adhere to stringent regulations, such as HIPAA in the United States, which safeguard patient data and ensure the confidentiality and integrity of medical services. Government agencies like the Food and Drug Administration (FDA) and the Department of Health and Human Services (HHS) have also released draft guidance in areas such as the use of health data in algorithms and AI/ML in medical devices. Furthermore, per Executive Order 14110, HHS is tasked with developing, by April 27, 2024, a strategy to evaluate whether AI-enabled healthcare technologies maintain sufficient levels of quality and performance. This includes developing an AI assurance policy and infrastructure to assess algorithm performance against real-world data both pre- and post-deployment.

We expect these guidelines and regulations to be finalized, and new ones to continue to emerge, for healthcare enterprises with AI use cases. Enterprises operating in healthcare and employing AI/ML should therefore initiate their AI governance efforts today to stay ahead of the impending surge in laws and regulations.

Financial Services

In the world of financial services, AI systems have increasingly been utilized for credit scoring and decision-making processes. These AI systems, which are designed to make consequential predictions like creditworthiness, can tap into vast amounts of consumer data, expanding beyond traditional credit reports to include data harvested from consumer behaviors and even social media profiles. But with this innovation come heightened risks and, with them, an increased need for responsible governance practices and oversight.

Now, when integrating AI into lending decisions, creditors need to explain those decisions with precision, a mandate reinforced by the Consumer Financial Protection Bureau (CFPB). The CFPB’s recent guidance underscores a clear directive: lenders using AI must provide specific and accurate reasons for adverse credit decisions, leaving no room for vague explanations or unexplainable AI systems.
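
What might a “specific and accurate” reason look like in practice? One widely used pattern is to take a per-applicant feature attribution (for example, from a SHAP-style explainer) and map the features that most lowered the applicant’s score to plain-language adverse-action statements. The sketch below is a minimal, hypothetical illustration of that pattern, not a CFPB-prescribed procedure; the feature names, reason text, and two-reason cutoff are all assumptions.

```python
# Hypothetical sketch: turning per-applicant feature attributions into
# specific adverse-action reasons. Feature names, contribution values,
# and reason wording are illustrative only.

# Hypothetical mapping from model features to human-readable reasons.
ADVERSE_REASON_TEXT = {
    "utilization":   "Proportion of revolving balances to credit limits is too high",
    "delinquencies": "Number of recent delinquent accounts",
    "inquiries":     "Too many recent credit inquiries",
    "income":        "Income insufficient for amount of credit requested",
}

def adverse_action_reasons(contributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Map the features that pushed the score down the most to specific reasons."""
    negatives = sorted(
        (item for item in contributions.items() if item[1] < 0),
        key=lambda item: item[1],  # most negative contribution first
    )
    return [ADVERSE_REASON_TEXT[name] for name, _ in negatives[:top_n]]

# Attributions for one hypothetical denied applicant (negative = lowered the score).
applicant = {"utilization": -0.42, "inquiries": -0.11, "income": 0.05, "delinquencies": -0.03}
print(adverse_action_reasons(applicant))
# ['Proportion of revolving balances to credit limits is too high',
#  'Too many recent credit inquiries']
```

The governance point is that each stated reason must trace back to something the model actually used, which is exactly what opaque, unexplainable systems cannot support.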

Organizations utilizing AI/ML systems in Financial Services should embark on their AI governance journey today to ensure compliance with existing mandates and to prepare for forthcoming regulations.

Human Resources

Human Resources (HR) is another domain experiencing rapid transformation due to AI. Recruitment, talent acquisition, and employee management have all begun harnessing AI to make more informed decisions. However, this technological uptake is not without regulatory guardrails, as evidenced by NYC Local Law No. 144 (LL-144).

The law, enforcement of which began in July 2023, mandates that AI technologies used in hiring and promotion decisions be systematically audited for bias before being deployed, with non-compliance attracting fines of up to $1,500 per violation as well as potential reputational damage. The law emphasizes the need to carefully monitor and govern the use of AI systems to ensure fairness and eliminate discrimination.
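
For intuition, the core calculation of an LL-144 bias audit can be sketched in a few lines. Under the law’s published rules, the impact ratio for a category is its selection rate divided by the selection rate of the most-selected category. The snippet below is a minimal illustration with hypothetical data and column names, not Credo AI’s audit methodology.

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Selection rate per category, divided by the most-selected category's rate."""
    rates = df.groupby(group_col)[selected_col].mean()  # selection rate per category
    out = rates.to_frame("selection_rate")
    out["impact_ratio"] = rates / rates.max()           # 1.0 for the most-selected category
    return out

# Hypothetical screening outcomes: 1 = advanced by the tool, 0 = not advanced.
data = pd.DataFrame({
    "sex":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1,    0,   1,   0,   1,   1,   0,   1],
})
print(impact_ratios(data, "sex", "selected"))  # F: rate 0.50, ratio 0.667; M: rate 0.75, ratio 1.0
```

LL-144 requires these ratios to be computed by an independent auditor and a summary of results to be published, rather than fixing a pass/fail threshold; in practice, many teams also flag ratios below the four-fifths (0.8) benchmark familiar from U.S. employment law for closer review.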

Prepare for AEDT Bias Audit: Credo AI has already helped both employers and vendors prepare for the requirements of Local Law 144. Learn more about how Credo AI can support your organization’s compliance by requesting a demo today!

Insurance

The insurance industry, which is no stranger to predictive modeling, has increasingly adopted AI to assess risk and tailor policies. Yet, as these models become more sophisticated, they also raise concerns about fairness and discrimination.

Executive Order 14110 encourages the CFPB to use its authority to ensure compliance with Federal law. As part of an already heavily regulated sector, insurance companies may face increased scrutiny of their underwriting processes with respect to housing insurance, and regulators are encouraged to consider rulemaking for the third-party AI services currently in use.

Colorado's SB21-169, the leading legislation aimed at preventing unfair discrimination in insurance practices, explicitly addresses these concerns by prohibiting the use of external consumer data and information sources (ECDIS), algorithms, and predictive models. Specifically, life insurers underwriting must be prepared to prove they are not using external consumer data and information sources in a manner that leads to "unfair discrimination" against individuals based on protected characteristics. 

This legislation acknowledges the potential for algorithmic bias to perpetuate systemic inequalities and mandates that the insurance industry operate with the utmost integrity and fairness. The law’s implementation requires insurers to carefully evaluate their underwriting processes and make the necessary adjustments to comply with the non-discrimination requirements.
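
While Colorado’s implementing regulations were still being finalized at the time of writing, one basic building block of the quantitative testing regulators contemplate is a comparison of outcome rates across groups. The sketch below runs a two-proportion z-test on hypothetical approval counts; it illustrates the statistical idea only and is not the Colorado Division of Insurance’s prescribed methodology.

```python
from math import sqrt
from statistics import NormalDist

def approval_rate_ztest(approved_a: int, total_a: int,
                        approved_b: int, total_b: int) -> tuple[float, float]:
    """Two-proportion z-test: is group A's approval rate different from group B's?"""
    p_a, p_b = approved_a / total_a, approved_b / total_b
    p_pool = (approved_a + approved_b) / (total_a + total_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))              # two-sided p-value
    return z, p_value

# Hypothetical counts: 82% approval in group A vs. 72% in group B.
z, p = approval_rate_ztest(approved_a=820, total_a=1000, approved_b=720, total_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # large |z| / small p -> statistically significant gap
```

A statistically significant gap is not by itself proof of unfair discrimination, but it is the kind of signal that should trigger deeper investigation of the ECDIS and model features driving the disparity.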

Prepare for Colorado SB21-169: By December 1, 2024, all life insurers authorized to do business in the state of Colorado that are using external consumer data and information sources (ECDIS), as well as algorithms and predictive models relying on ECDIS, must be prepared to prove that they are not using any of these in a manner that leads to “unfair discrimination” against customers based on protected characteristics. Schedule a call with us and learn how Credo AI can help you with compliance.

AI Guardrails: A Trend That's Just Beginning

Each of these high-risk industries deals with decisions or outcomes that directly affect individual well-being, making responsible AI deployment critical. Laws and regulations like NYC Local Law 144, Colorado SB21-169, and the EU AI Act are early but vital efforts to rein in these risks by mandating transparency, accountability, and compliance.

And most importantly, they show a clear trend: the beginning of broader legal frameworks that will demand comprehensive AI governance strategies. 

It goes without saying: with the rise of regulations and standards, enterprises and organizations all around the globe should be gearing up for the important task of implementing AI governance—especially those in highly regulated sectors like healthcare, financial services, human resources, and insurance. 

How Credo AI Can Help

Credo AI recognizes the necessity of aligning High-Risk AI applications with ethical standards and regulations—and we provide the tools, expertise, and guidance to achieve this. By using Credo AI's services, organizations can:

  • Assess and Monitor Risk: Identify potential ethical and compliance risks inherent in your AI systems with our risk assessment tools.
  • Ensure Transparency: Provide stakeholders and regulators with the necessary documentation and information regarding your AI systems’ development and decision-making processes.
  • Enhance Accountability: Our governance framework allows for clear assignment and tracking of AI system responsibilities, ensuring accountability throughout your AI's lifecycle.
  • Demonstrate Compliance: Navigate the complex regulatory landscape with guidance and affirmation that your AI systems are compliant with existing and emerging laws and standards.
  • Implement Ethics by Design: We facilitate the integration of ethical considerations into your AI systems from inception, fostering trust and reliability in your applications.

Conclusion

AI brings transformative potential to industries with significant impacts on safety and welfare, and with such power comes great responsibility. Organizations harnessing AI in High-Risk scenarios need to be safeguarded by a robust governance framework that ensures their technological advancements are safe and equitable. At Credo AI, we understand this imperative and are poised to help businesses navigate the complexities of High-Risk AI so that innovation does not come at the expense of ethical integrity or regulatory compliance. If you are working at the nexus of AI and High-Risk use cases in HR, Financial Services, Healthcare, or Insurance, partner with us to start your responsible AI journey.

Schedule a call with us today and get started with your AI governance journey!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.