AI Governance 101

Streamlining AI Governance: AI stakeholders and the jobs to be done

February 29, 2024

In 2023, the world woke up to the realities of deploying AI, as countless public pitfalls spurred a wave of regulatory guardrails meant to ensure AI's enormous impact on humanity is a positive one. As enterprises scramble to adopt AI quickly yet safely, various models of AI governance abound. All of them, however, require multidisciplinary collaboration across stakeholders like data science, legal, and business teams, as well as extensive documentation.

This month, we’re introducing our auto-magic features powered by GenAI to handle the heavy lifting involved in AI governance documentation and categorization. AI governance should be as painless as possible, so AI stakeholders can focus on the whole point of AI: elevating business processes and taking their organization to new heights.

Attend this webinar to learn:

• How to build the ideal AI Governance Team in your organization

• How an AI Governance Team works together, and the jobs to be done

• Where generative AI can help in AI governance (use case intake; documentation; risk categorization)

SPEAKERS
Ehrik Aldana
Tech Policy Product Manager
Ian Eisenberg
Head of Data Science
Machine Learning Engineer & Cognitive Neuroscientist

Register Now

You may also like

AI Governance 101
webinar

How NIST Pioneered GenAI Controls—and How to Operationalize Them

Chances are, you’ve felt the expanding mandate for AI usage at your company, with GenAI being embedded in every department and function. But unapproved usage, or "shadow AI," is skyrocketing: according to a Salesforce study, over 50% of employees use unapproved generative AI tools at work.

On April 29, 2024, the National Institute of Standards and Technology (NIST) released the initial public draft of its AI Risk Management Framework profile for generative AI (GenAI). The profile defines a set of risks that are novel to or exacerbated by GenAI, and provides actions to help organizations manage those risks at the use case level to power scalable, safe GenAI adoption.

The trailblazing draft AI RMF GenAI Profile was developed over the past twelve months and drew on input from NIST’s generative AI public working group of more than 2,500 members, of which Credo AI is one, as well as prior work on NIST’s overarching AI Risk Management Framework.

Credo AI is excited to present this webinar explaining these newly defined GenAI risks and controls, as well as how to approach comprehensive AI governance and risk management for enterprises of all sizes that want to safely deploy GenAI tools.

Watch this webinar to learn:

• An overview of newly published GenAI governance documents, with a deep dive into NIST AI 600-1

• How to apply GenAI controls to high-risk AI scenarios: high-risk AI industries and use case examples

• Contextual AI governance: why you should apply controls, and manage AI risk, at the use-case level