AI Governance

AI Governance 101 - AI Governance Starts with a Registry

August 3, 2023

The AI technology landscape is expanding so rapidly, and affecting so many departments, that many data and privacy leaders are asking: how do we even begin to govern AI? The answer, according to many best-practices frameworks and regulatory guidelines, is to start with a registry or ledger that gives you oversight of the AI initiatives at your company.

Credo AI has recently introduced AI Registry to enable this oversight, so that people of any technical background can take the first step toward implementing AI governance at their company. Join this webinar to understand where to begin with AI governance, and what full AI governance maturity looks like.

• Step 1: Register AI Systems. Maintain a repository of the AI you’re building, buying, and using; identify risks contextually.

• Step 2: Apply Risk-Based Controls. Define AI system requirements based on deployment context, such as applicable laws, regulations, and standards.

• Step 3: Gather & Evaluate Evidence. The Credo AI Platform takes evidence from your AI infrastructure and documentation about your AI systems to validate whether controls are met.

• Step 4: Define Mitigations. If an AI system isn’t meeting all requirements, define and assign mitigations to the relevant stakeholders.

• Step 5: Track Changes in System Compliance. Credo AI connects to your monitoring tools to analyze ongoing system use and identify noncompliant behavior. (A simplified sketch of this five-step workflow follows the list.)
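To make the five steps above concrete, here is a minimal, illustrative sketch of a registry-driven governance workflow. The data structures and names (AISystem, Control, evaluate_controls) are assumptions made for this example, not the Credo AI Platform’s API; they simply show how a registry entry, risk-based controls, evidence, and mitigations fit together.

```python
from dataclasses import dataclass, field

# Step 1: a registry entry for an AI system you build, buy, or use.
@dataclass
class AISystem:
    name: str
    vendor: str                      # "internal" if built in-house
    deployment_context: str          # e.g. "HR candidate screening"
    risks: list = field(default_factory=list)

# Step 2: a risk-based control derived from laws, regulations, or standards.
@dataclass
class Control:
    requirement: str                 # human-readable control statement
    check: callable                  # returns True when evidence satisfies the control

# Step 3: evaluate evidence against controls; Step 4: define mitigations for any gaps.
def evaluate_controls(system: AISystem, controls: list, evidence: dict) -> list:
    mitigations = []
    for control in controls:
        if not control.check(evidence):
            mitigations.append(
                f"{system.name}: assign a mitigation for unmet control '{control.requirement}'"
            )
    return mitigations

# Example usage with made-up evidence values.
resume_screener = AISystem(
    name="resume-screener",
    vendor="internal",
    deployment_context="HR candidate screening",
    risks=["harmful bias", "data privacy"],
)
controls = [
    Control("Disparate impact ratio >= 0.8", lambda e: e.get("disparate_impact", 0) >= 0.8),
    Control("Model documentation exists", lambda e: e.get("has_model_card", False)),
]
evidence = {"disparate_impact": 0.72, "has_model_card": True}

# Step 5 would re-run this evaluation as monitoring data refreshes.
for item in evaluate_controls(resume_screener, controls, evidence):
    print(item)
```

In this sketch, Step 5 amounts to re-running the same evaluation whenever monitoring tools supply fresh evidence, so compliance status stays current as the system and its deployment context change.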

Join us to learn these best practices for implementing AI governance — and bring a notepad!

SPEAKERS
Susannah Shattuck
Head of Product
Lucas Stewart
Sr. Product Marketing

Register Now

You may also like

AI Governance 101
webinar

How NIST Pioneered GenAI Controls—and How to Operationalize Them

Chances are, you’ve felt the expanding mandate for AI usage at your company, with GenAI being embedded in every department and function. But unapproved usage or "shadow AI" is skyrocketing, with over 50% of employees using unapproved generative AI tools at work, according to a Salesforce study.

On 29 April 2024, the National Institute of Standards and Technology (NIST) released its initial public draft of the AI Risk Management Profile for GenAI, which defines a group of risks that are novel to or exacerbated by the use of generative AI (GenAI), and provides a set of actions to help organizations manage these risks at the use-case level to power scalable, safe GenAI adoption.

The trailblazing new draft AI RMF GenAI Profile was developed over the past twelve months and drew on input from the NIST generative AI public working group of more than 2,500 members, of which Credo AI is a member, as well as prior work on NIST’s overarching AI Risk Management Framework.

Credo AI is excited to present this webinar, explaining these newly defined GenAI risks and controls, as well as how to approach comprehensive AI governance and risk management for enterprises of all sizes that want to safely deploy GenAI tools. Watch this webinar to learn:

• An overview of newly published GenAI governance documents, with a deep dive into NIST AI 600-1

• How to apply GenAI controls to high-risk AI scenarios: high-risk AI industries and use case examples

• Contextual AI governance: why you should apply controls, and manage AI risk, at the use-case level (a rough sketch of this idea follows)
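As a rough illustration of what governing at the use-case level can mean in practice, the sketch below maps individual GenAI use cases to the risks and controls that apply to them. The structure, risk names, and control wording are illustrative assumptions for this example; they are not the NIST AI 600-1 taxonomy verbatim and not the Credo AI Platform’s data model.

```python
# Illustrative only: a use-case-level mapping from GenAI deployments to the
# risks and required controls that apply to them. All names are hypothetical.
GENAI_USE_CASES = {
    "customer-support-chatbot": {
        "risks": ["confabulation", "data privacy", "information security"],
        "required_controls": [
            "Ground responses in an approved knowledge base",
            "Redact personal data from prompts and logs",
            "Provide a human escalation path for sensitive queries",
        ],
    },
    "marketing-copy-generation": {
        "risks": ["intellectual property", "information integrity"],
        "required_controls": [
            "Human review of generated copy before publication",
            "Disclose AI-generated content where required",
        ],
    },
}

def controls_for(use_case: str) -> list:
    """Return the controls a team must satisfy before deploying a given use case."""
    entry = GENAI_USE_CASES.get(use_case)
    if entry is None:
        # Unregistered use cases go back to Step 1: register the system first.
        raise ValueError(f"Unregistered use case: {use_case}")
    return entry["required_controls"]

print(controls_for("customer-support-chatbot"))
```

The point of this shape is that the same base model can carry very different risks and controls depending on the use case it serves, which is the sense in which controls are applied, and AI risk is managed, at the use-case level rather than per model.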