Credo AI Product Update: Build Trust in Your AI with New Transparency Reports & Disclosures
Today, we’re excited to announce a major update to the Responsible AI Platform, focused on Responsible AI transparency reports and disclosures. These new capabilities help companies standardize and streamline the assessment of their AI/ML systems for Responsible AI issues such as fairness and bias, explainability, robustness, security, and privacy, and automatically produce the reports and disclosures needed to meet organizational, regulatory, and legal requirements, as well as customer demands for transparency.
We have built this feature set to make Responsible AI reporting as easy as possible for our customers, because we believe that transparency is the first step of meaningful AI governance and accountability. And we’re not alone: the last year has seen tremendous activity in the policy ecosystem around Responsible AI disclosures and reporting. Emerging AI-focused regulations increasingly require reports and disclosures from companies building and using AI, from the bias audit reports required by New York City’s algorithmic hiring law to the conformity assessments required by the EU AI Act. The public, too, is demanding more accountability and transparency in how AI systems work as awareness of the risks of algorithmic systems grows.
We believe transparency reporting and disclosures must go beyond simply reporting on risk: they must also build trust with the people impacted by AI systems. Organizations are facing a crisis of trust when it comes to AI; if they don’t earn the trust of key stakeholders, their AI investments may never take off.
Many of our customers understand the urgency of Responsible AI reporting, whether driven by regulation or customer demand, but until now they have struggled to operationalize it at scale. Between translating legal or organizational requirements into actionable assessment criteria for technical teams and turning technical documentation into meaningful, understandable artifacts for non-technical stakeholders, generating even a simple compliance report took some of our customers’ data science teams weeks or even months.
Our new reporting capabilities include key features to reduce the burden of governance and standardize reporting across the entire organization:
- Policy Packs encode reporting requirements from laws, regulations, standards, guidelines, and internal company processes or policies into standardized templates, with clear instructions for AI/ML development teams to produce any required technical evidence or documentation.
- Automated Report Generation that translates technical data about your models and your datasets into insights about risk and compliance, with report templates tailored to your specific audience—whether it’s internal stakeholders, external customers, or regulators.
- Integration with Credo AI Lens allows your technical teams to run model and dataset assessments programmatically, from their notebook environments or CI/CD pipelines, with just a few lines of code.
- Reviews, Approvals, and Attestations make it easy to ensure that every RAI artifact your organization produces through its governance process is accurate, reviewed by informed multidisciplinary stakeholders, and logged to meet your internal audit requirements.
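To give a flavor of the kind of technical evidence these assessments produce, here is a minimal sketch of a programmatic fairness check, computing per-group selection rates and a demographic parity gap for a binary model. The function names and metric choice are illustrative assumptions for this post, not Credo AI’s actual API.

```python
# Illustrative sketch of a fairness/bias assessment; names and metric
# choices are hypothetical, not Credo AI Lens's actual interface.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions for each demographic group."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups (0 means parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: a hiring model's binary decisions across two applicant groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)               # {'A': 0.75, 'B': 0.25}
gap = demographic_parity_difference(preds, groups)   # 0.5
```

A reporting pipeline can then compare such metrics against thresholds encoded in a Policy Pack and surface the results in an audience-appropriate report.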
We are very excited to support our customers on their Responsible AI journey, and we believe this new release will also help the broader ecosystem advance its work and establish standards for Responsible AI transparency reports and disclosures.
If you’re interested in learning more, please reach out to email@example.com.