AI Compliance

ISO/IEC 42001: Promote Trust through Implementing Standards for Responsible AI

On December 18, 2023, ISO and the IEC published ISO/IEC 42001. This globally recognized management system standard provides a comprehensive framework designed to systematically address and control the risks associated with the development and deployment of AI systems. Take a closer look at this new standard and how your organization can become certified, demonstrating that its AI systems are developed and deployed in a trustworthy way.

December 21, 2023
Author(s)
Ehrik Aldana

Introduction: Background on ISO/IEC 42001

As an eventful 2023 comes to a close, the AI landscape continues to evolve at an unprecedented pace. On December 18, 2023, the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) published ISO/IEC 42001. This globally recognized management system standard provides a comprehensive framework designed to systematically address and control the risks associated with the development and deployment of AI systems.

At its core, ISO/IEC 42001 establishes international foundational practices for organizations to develop AI responsibly and effectively, while promoting public trust in AI systems with a certifiable standard.

A Standard to Ensure and Signal Responsible AI

The role of ISO and IEC in creating best practices across various domains is well-established. This is true in fields like cybersecurity, where ISO/IEC 27001 has set the benchmark for information security management. ISO/IEC 42001 is expected to play a similar role for AI governance, allowing organizations to certify and demonstrate that the development and deployment of their AI systems can be trusted.

While the standard is voluntary, it's important to consider how the introduction of ISO/IEC 42001 will impact and be reflected in ongoing regulatory initiatives regarding AI, such as the EU AI Act (which reached political agreement on December 8, 2023). As regulations and compliance needs evolve, ISO/IEC 42001 will become instrumental as an interoperable standard that certifies that AI systems, especially those categorized as high-risk, meet safety and ethical norms.

A Closer Look at ISO/IEC 42001

ISO/IEC 42001 mandates the creation of an AI Management System: a framework comprising the structures, policies, and processes required to manage AI’s unique risks effectively. The standard provides clear requirements and offers guidance for establishing, implementing, and continuously improving an AI management system.

This standard is designed to be adaptable across different facets of artificial intelligence and is applicable to an array of organizational contexts, providing a clear and actionable approach to AI risk assessment and management.

As with other ISO standards, ISO/IEC 42001 will allow accredited certification bodies to audit and certify that an organization’s AI management system conforms with the standard – promoting critical public trust. While additional requirements for certification bodies to confirm conformity with ISO/IEC 42001 are still in development, we anticipate official certification to begin in 2024.

Operationalize Standards with Credo AI

At Credo AI, we acknowledge the challenges and opportunities that organizations may face when aligning with new standards like ISO/IEC 42001. Our goal is to simplify this process by offering expert guidance and software solutions that help organizations embed Responsible AI principles, policies, and norms into their AI management practices.

Our AI Governance Platform translates the complex requirements of emerging policies and standards (like the NIST AI Risk Management Framework) into actionable steps. With our tools, companies can confidently evaluate their AI systems, ensure they meet necessary requirements, and maintain an audit-ready stance. Through these offerings, we aim to be a partner to organizations as they strive to operate within the evolving landscape of AI governance and public trust.

Global alignment around core elements of AI risk management is already happening and is reflected in standards like ISO/IEC 42001. As AI continues to integrate deeply into the operational fabric of organizations, such standards become essential.

Book a demo to learn more about how we can facilitate this transition, ensuring that your organization is at the forefront of principled and trustworthy AI development and deployment.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.