Credo AI Product Update: Build Trust in Your AI with New Transparency Reports & Disclosures

Susannah Shattuck
Head of Product
November 3, 2022
Credo AI Product Update Screenshot

Today, we’re excited to announce a major update to the Responsible AI Platform focused on Responsible AI transparency reports and disclosures. These new capabilities are designed to help companies standardize and streamline the assessment of their AI/ML systems for Responsible AI issues like fairness and bias, explainability, robustness, security, and privacy, and to automatically produce the reports and disclosures needed to meet new organizational, regulatory, and legal requirements, as well as customer demands for transparency.

We have built this feature set to make Responsible AI reporting as easy as possible for our customers because we believe that transparency is the first step of meaningful AI governance and accountability. And we’re not alone—the last year has seen tremendous activity in the policy ecosystem regarding Responsible AI disclosures and reporting. Emerging AI-focused regulations are increasingly requiring reports and disclosures from companies that are building and using AI, from the bias audit reports required by New York City’s algorithmic hiring law to the conformity assessments required by the EU AI Act. The public, too, is demanding more accountability and transparency into how AI systems work, as they become more aware of the risks associated with algorithmic systems.

We believe transparency reporting and disclosures must go beyond simply reporting on risk: they must also build trust with the people impacted by AI systems. Organizations are facing a crisis of trust when it comes to AI—if they don’t earn the trust of key stakeholders, their AI investments may never take off.

Many of our customers understand the urgency of Responsible AI reporting, whether they’re driven by regulation or customer demand, but until now they have struggled to operationalize it at scale. Between translating legal or organizational requirements into actionable assessment criteria for technical teams and turning technical documentation into meaningful, understandable artifacts for non-technical stakeholders, generating a single compliance report took some of our customers’ data science teams weeks or even months.

Our new reporting capabilities include key features to reduce the burden of governance and standardize reporting across the entire organization:

  • Policy Packs encode reporting requirements from laws, regulations, standards, guidelines, and internal company processes or policies into standardized templates, with clear instructions for AI/ML development teams to produce any required technical evidence or documentation.
  • Automated Report Generation that translates technical data about your models and your datasets into insights about risk and compliance, with report templates tailored to your specific audience—whether it’s internal stakeholders, external customers, or regulators.
  • Integration with Credo AI Lens allows your technical teams to run model and dataset assessments programmatically, from their notebook environments or CI/CD pipelines, with just a few lines of code.
  • Reviews, Approvals, and Attestations make it easy to ensure that every RAI artifact your organization produces through its governance process is accurate, has been reviewed by informed, multidisciplinary stakeholders, and is logged to meet your internal audit requirements.
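
To make the pipeline-integration idea concrete, here is a minimal, self-contained sketch of the kind of fairness evidence an automated assessment step might produce and gate on. This is not the Credo AI Lens API—the function and the policy threshold below are hypothetical illustrations of a demographic parity check on model outputs.

```python
# Illustrative sketch only: NOT the Credo AI Lens API. It shows the kind of
# fairness metric a CI/CD assessment step might compute and compare against
# a policy threshold before a model is approved.

from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Max difference in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: a hiring model's predictions for two applicant groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
disparity = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {disparity:.2f}")  # prints 0.50

# A pipeline gate might fail the build when disparity exceeds a
# threshold encoded in a Policy Pack (hypothetical value):
THRESHOLD = 0.2
if disparity > THRESHOLD:
    print("FAIL: disparity exceeds policy threshold")
```

In a real deployment, metrics like this would be produced by the assessment framework and pushed to the platform, where the Policy Pack's threshold and the review workflow determine whether the model passes governance.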

We are very excited to support our customers on their Responsible AI journey—and we believe this new release will help the broader ecosystem advance its work and establish standards for Responsible AI transparency reports and disclosures.

If you’re interested in learning more, please reach out to demo@credo.ai.
