Credo AI Lens™: the ultimate open-source framework for Responsible AI assessments
While ML capabilities have developed at a staggering pace, guidelines and processes to mitigate ML risks have lagged behind. That’s where AI Governance comes in: it defines the policies and processes needed to safeguard AI systems. Successfully operationalizing the responsible development of AI systems has many components, but a chief need is assessing AI systems to determine whether they behave appropriately for their intended purpose. This assessment challenge is central to Credo AI’s efforts to develop AI Governance tools. In this blog, we introduce Credo AI Lens, our open-source framework built to support the assessment needs of your AI Governance process.