Product Announcement

Credo AI Launches the Most Comprehensive Governance Solution to Support ISO 42001 Adoption

Credo AI is excited to announce the general availability of our ISO/IEC 42001 Policy Pack to operationalize the world’s first certifiable standard for establishing an AI Management System (AIMS)!

April 16, 2024
Author(s)
Ehrik Aldana

A Standard to Ensure and Signal Responsible AI

On December 18, 2023, the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) published ISO/IEC 42001. This globally recognized management system standard provides a comprehensive framework designed to systematically address and control the risks associated with the development and deployment of AI systems.

At its core, ISO/IEC 42001 establishes international foundational practices for organizations to develop AI responsibly and effectively, while promoting public trust in AI systems with a standard that can eventually be certified through a third-party audit.

ISO and IEC have a well-established track record of creating best practices across domains. In cybersecurity, for example, ISO/IEC 27001 has set the benchmark for information security management. ISO/IEC 42001 is expected to play a comparable role for AI governance, allowing organizations to certify and demonstrate that the development and deployment of their AI systems can be trusted.

AI Governance at the Organization and Use Case Level

A key aspect of adopting ISO/IEC 42001 effectively is understanding its emphasis on both organization-level and use case-level governance. The standard is not just about overarching organizational policies but about governing each individual AI use case. By addressing both levels, organizations can ensure they are not merely complying with the standard at a surface level but are deeply embedding its principles into the development and deployment of all their AI systems.

In addition to outlining the structures, policies, and processes that an organization can adopt to manage AI’s unique risks effectively, ISO/IEC 42001 requires the collection of evidence such as technical documentation, user instructions, risk evaluation, and impact assessments for every particular AI use case, product, or service in use by your organization (see more on the importance of having a robust AI Registry).

Prepare for ISO/IEC 42001 Certification with Credo AI

As with other ISO standards, ISO/IEC 42001 will allow accredited certification bodies to audit and certify that an organization’s AI management system conforms to the standard, promoting critical public trust. While the accreditation requirements that certification bodies must meet to confirm conformity with ISO/IEC 42001 are still in development, we anticipate official certification to begin in 2024.

Today, organizations can use the Credo AI Governance Platform and ISO/IEC 42001 Policy Pack to facilitate internal and external gap and readiness assessments to prepare for compliance, giving organizations a crucial head start in the audit lifecycle.
  • AI Registry and Intake: Streamline and centralize the inventory of your organization’s AI use cases, tracking the key documentation and evidence needed to demonstrate compliance with ISO/IEC 42001 and other policies.
  • Policy Packs: Our ISO/IEC 42001 Policy Pack gives organizations clear, actionable steps for adopting the standard and monitoring compliance. Policy Packs are also available for other AI-related laws, regulations, and standards.
  • Intelligent Risk Management: Address ISO 42001’s requirements for establishing risk assessment and treatment processes with the Credo AI Governance Platform’s library of AI-specific risk scenarios and controls.
  • Audit-Ready Reports: Generate reports that demonstrate your organization's adherence to ISO 42001 and other regulatory frameworks to build and maintain trust.

At Credo AI, we acknowledge the challenges and opportunities that organizations may face when aligning with new standards like ISO/IEC 42001. Our goal is to simplify this process by offering expert guidance and software solutions that help organizations embed Responsible AI principles, policies, and norms into their AI management practices.

Our AI Governance Platform translates the complex requirements of emerging policies and standards (like the NIST AI Risk Management Framework) into actionable steps. With our tools, companies can confidently evaluate their AI systems, ensure they meet necessary requirements, and maintain an audit-ready stance. Through these offerings, we aim to be a partner to organizations as they strive to operate within the evolving landscape of AI governance and public trust.

Global alignment is already happening around core elements of AI risk management, as reflected in standards like ISO/IEC 42001. As AI integrates more deeply into the operational fabric of organizations, such standards become essential.

Schedule a call with our expert team to learn more about how we can help you ensure that your organization is at the forefront of principled and trustworthy AI development and deployment.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.