Mastering AI Risks: Building Trustworthy Systems with the NIST AI Risk Management Framework (RMF) 1.0

May 4, 2023
Author(s)
Evi Fuelle

To support the rapid growth of Artificial Intelligence adoption, the National Institute of Standards and Technology (NIST) spent significant time gathering stakeholder feedback from both the public and private sectors before publishing the comprehensive NIST AI Risk Management Framework 1.0 (AI RMF) on January 26, 2023. Two months later, on March 30, 2023, NIST released a companion AI RMF Playbook for voluntary use, which suggests ways to navigate and use the AI RMF to incorporate trustworthiness considerations into the design, development, deployment, and use of AI systems.

The NIST AI RMF is a voluntary framework that offers guidance on managing and mitigating AI-related risks in a structured, measurable, and flexible manner.

✅ This framework is sector, use case, and law and regulation agnostic, providing organizations with a horizontal foundation for understanding, mapping, measuring, managing, and governing AI risks.

"Northrop Grumman is using AI for applications like wayfinding, unmanned vehicles, enhanced target recognition, and many other applications. Credo AI has been helping us stand up for responsible AI governance within our company so that we can create AI according to the highest ethical standards, using a comprehensive contextual AI policy system that can guide the development, deployment, and use of AI. The NIST AI RMF provides us with that foundation of what to do so we can advance AI trustworthiness. Building confidence and trust in AI solutions is crucial right now, and the NIST AI RMF can help us do that."

- Amanda Muller, Ph.D., Chief of Responsible Technology at Northrop Grumman, Credo AI NIST RMF 1.0 Webinar (Feb. 2, 2023)

Operationalizing values to cultivate a culture of proactively preventing AI risks

One of the unique features of the NIST AI RMF is its rights-preserving approach to Artificial Intelligence. It outlines a process that goes beyond traditional measures of accuracy, robustness, and reliability to also recognize the socio-technical characteristics of the system, such as privacy, interpretability, safety, and bias. 

By establishing a shared understanding of what constitutes trust, the NIST AI RMF is paving the way for identifying what needs to be measured, providing guidance on how to measure each of these aspects, and helping organizations to navigate the trade-offs in decisions about how safe, private, and accurate an AI system should be.
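
As a simple illustration of what "measuring" a trustworthiness characteristic can look like in practice, the sketch below computes two metrics for a hypothetical binary classifier: plain accuracy (a proxy for validity and reliability) and a demographic parity gap (one common proxy for fairness). The data, group labels, and function names are all illustrative assumptions, not part of the NIST AI RMF.

```python
# A minimal, self-contained sketch (not from the NIST AI RMF) of measuring
# two trustworthiness characteristics for a hypothetical binary classifier.

def accuracy(y_true, y_pred):
    """Fraction of correct predictions: a proxy for 'valid and reliable'."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups:
    one simple proxy for the 'fair and unbiased' characteristic."""
    def positive_rate(g):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(preds) / len(preds)
    return abs(positive_rate("A") - positive_rate("B"))

# Hypothetical predictions for eight applicants in two demographic groups.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"accuracy:   {accuracy(y_true, y_pred):.2f}")                # 0.75
print(f"parity gap: {demographic_parity_gap(y_pred, groups):.2f}")  # 0.00
```

How large a gap is acceptable, and how much accuracy an organization is willing to trade to close it, is exactly the kind of decision the framework asks stakeholders to make explicit.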

"If you cannot measure it, you cannot improve it. The AI RMF adopts a rights-preserving approach to AI and outlines a process to address traditional measures of accuracy, robustness, and reliability. The RMF describes trustworthy AI as valid and reliable, safe, fair and unbiased, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced."

- Elham Tabassi, Chief of Staff in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST),
Credo AI NIST RMF 1.0 Webinar (Feb. 2, 2023)

NIST RMF 1.0: Adoption Challenges for Organizations 

At Credo AI, we work every day with companies of all sizes, across industry sectors such as financial services, insurance, and human resources, to create Responsible AI governance processes and policies, including by applying frameworks like the NIST AI RMF.

While we applaud the NIST AI RMF and the significant contribution it makes to promoting responsible AI development and use, we recognize that the success of this framework will also depend in part on its adoption by industry.

In our ongoing collaboration with organizations that design and deploy AI, we have identified some challenges that arise during implementation of the NIST AI RMF, which we believe require further work to address:

  1. Roadmap for Organizational Navigation Personas: For organizations lacking the capacity or expertise to implement every aspect of the NIST AI RMF, a roadmap identifying the personas responsible for various RMF steps, the lines of communication between them, and completion and review pathways would be extremely helpful. This is especially true for small businesses that may not have the in-house resources to fully adopt the framework.

     Clearer definition of the roles and responsibilities associated with the NIST AI RMF components will help organizations figure out who is responsible for this work; different roles and skills are needed to bring the NIST AI RMF to life (tools like Credo AI's Responsible AI Platform can help address this).
  2. Guidance on Specific Use Cases: Although the NIST AI RMF covers a broad range of AI/ML applications, its implementation for specific use cases requires a significant degree of interpretation. The NIST AI RMF Playbook is a helpful companion for addressing this.
  3. AI Risk Management Expertise: Expertise is crucial for companies to adopt the NIST AI RMF successfully. Two main categories of expertise are required:
     (1) domain expertise in the industry of the use case to be governed; and
     (2) domain expertise in Responsible AI itself.

     At the highest level, the NIST AI RMF establishes a process of defining risks for an AI use case, defining ways to measure those risks, measuring them, and then taking mitigating actions. Doing this well requires knowing how to identify, define, and measure AI risks (see the sketch after this list).

     This can be greatly simplified with tailored versions of the NIST AI RMF (in addition to the NIST AI RMF Playbook) that suggest risks and measurement methodologies for a specific use case. For example, a credit risk prediction use case may carry similar risks across different organizations, giving teams a much more specific starting point.
  4. Example Text and Templates: Without example text or templates showing what program managers could write in response to certain questions in the AI RMF (for example, what constitutes sociotechnical risks in a particular industry like financial services), it will likely be difficult for a program manager, or anyone else completing the NIST AI RMF for their company without experience in sociotechnical AI risks, to understand how to answer such questions accurately and fully.
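
To make the process described in challenge 3 concrete, here is a minimal sketch of what a map/measure/manage loop might look like in code for one use case. Every name, risk, and threshold below is a hypothetical illustration for a credit risk prediction application; it is not an API from the NIST AI RMF or from Credo AI's platform.

```python
# Hypothetical sketch of a map -> measure -> manage loop for one AI use case.
# The risk names, thresholds, and structure are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Risk:
    name: str                     # mapped risk, e.g. "demographic parity gap"
    measure: Callable[[], float]  # how the risk is quantified
    threshold: float              # tolerance agreed on by governance stakeholders
    last_value: Optional[float] = None

@dataclass
class RiskRegister:
    use_case: str
    risks: list[Risk] = field(default_factory=list)

    def measure_all(self):
        """MEASURE: run every metric and record the result."""
        for r in self.risks:
            r.last_value = r.measure()

    def manage(self):
        """MANAGE: flag risks whose measurement exceeds tolerance."""
        return [r.name for r in self.risks
                if r.last_value is not None and r.last_value > r.threshold]

# MAP: risks a credit risk prediction use case might share across organizations.
register = RiskRegister(
    use_case="credit risk prediction",
    risks=[
        Risk("demographic parity gap", measure=lambda: 0.08, threshold=0.05),
        Risk("error rate", measure=lambda: 0.12, threshold=0.20),
    ],
)
register.measure_all()
print(register.manage())  # -> ['demographic parity gap']
```

In practice, the placeholder lambdas would be replaced with real measurements (such as the parity gap computed earlier), and the thresholds would come from the stakeholder decisions that the framework's cross-cutting "Govern" function calls for.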

In light of the four challenges discussed, it is apparent that many companies, particularly small and medium-sized enterprises, would benefit from a guide to help them adopt the NIST AI RMF 1.0. Credo AI is poised to be that guide and is proud to have partnered closely with NIST to ensure that we are equipped to operationalize the AI RMF 1.0 for companies of all sizes and at every stage of Responsible AI design across various industries.

Closing the Gap: Operationalizing the NIST AI RMF with Credo AI

As contributors to the development of the NIST AI Risk Management Framework and participants in its official launch at the U.S. Department of Commerce, Credo AI has built a comprehensive solution to help organizations adopt the framework with ease.

Our Responsible AI Governance Platform now includes Policy Packs that help an organization better understand the requirements of the NIST AI RMF and take their AI Use Cases through the critical steps of “Map,” “Measure,” and “Manage” as defined by the AI RMF 1.0. 

These Policy Packs enable organizations to track their adoption of the NIST AI RMF for each of their AI/ML use cases or applications. They also help organizations generate and track technical and process evidence, and conduct human-in-the-loop reviews at critical stages of the AI development lifecycle, in accordance with the NIST AI RMF.

Conclusion

At Credo AI, our mission is to help organizations design, develop, and deploy AI systems that align with the highest ethical standards. We believe that adopting the NIST AI Risk Management Framework (RMF) is a crucial step in promoting responsible AI practices, and we are fully committed to guiding our customers in their efforts to adopt the framework.

We are eager to continue working with NIST to make the AI RMF 1.0 accessible and straightforward for organizations of all sizes and stages of AI maturity. If you have any questions or concerns, please don't hesitate to reach out to us. We are here to support you in your journey toward responsible and ethical AI practices.

Interested in operationalizing the NIST AI RMF in your organization? Request a demo today!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.