AI Compliance

The New Era of Safe AI with Dragos Tudorache

April 22, 2024

The debate about AI safety took center stage globally in 2023 with the advent of generative AI, and in the European Union it culminated in the thorough and sensible EU AI Act, which received its final approval on March 13, 2024. This marks a turning point for AI and heralds a new era for technology, one in which powerful and high-risk AI systems are expected to have guardrails by design.

At Credo AI, we believe that AI is the ultimate competitive advantage for modern enterprises. At the same time, AI without guardrails can backfire — whether that means shutting down expensive facial recognition systems or dealing with out-of-control LLMs that send stock prices into a tailspin.

Public AI pitfalls have led to rapidly eroding trust in AI, with 52% of Americans believing AI is “not safe or secure”. How do you establish trust and safety in AI?

What we’ve learned at Credo AI from years of research is that there is no single box to check. AI adoption at enterprises requires constant oversight, usually by a directly responsible individual with executive support. Ensuring an AI system is safe for organizational use — whether built, bought, or procured — requires AI-specific Governance, Risk, and Compliance (GRC) workflows.

Join us as our Founder and CEO Navrina Singh speaks with Member of the European Parliament Dragos Tudorache about how and why trustworthy AI by design is critical for global economic and societal safety and security.

Attend this webinar to learn:

• An overview of the historic EU AI Act

• What a risk-based approach is and what it means for enterprises globally

• The current state of AI safety, and how regulation can help bring trust to the ecosystem and enterprises

SPEAKERS
Navrina Singh
Founder & CEO
Dragos Tudorache
Member of the European Parliament

Register Now

You may also like

AI Governance 101
webinar

How NIST Pioneered GenAI Controls—and How to Operationalize Them

Chances are, you’ve felt the expanding mandate for AI usage at your company, with GenAI being embedded in every department and function. But unapproved usage, or "shadow AI," is skyrocketing, with over 50% of employees using unapproved generative AI tools at work, according to a Salesforce study.

On 29 April 2024, the National Institute of Standards and Technology (NIST) released its initial public draft of the AI Risk Management Profile for GenAI, which defines a group of risks that are novel to or exacerbated by the use of Generative AI (GenAI) and provides a set of actions to help organizations manage these risks at the use case level to power scalable, safe GenAI adoption.

The trailblazing new draft AI RMF GenAI Profile was developed over the past twelve months and drew on input from the NIST generative AI public working group of more than 2,500 members, of which Credo AI is a member, as well as NIST’s prior work on its overarching AI Risk Management Framework.

Credo AI is excited to present this webinar explaining these newly defined GenAI risks and controls, as well as how to approach comprehensive AI governance and risk management for enterprises of all sizes that want to safely deploy GenAI tools.

Watch this webinar to learn:

• An overview of newly published GenAI governance documents, with a deep dive into NIST AI 600-1

• How to apply GenAI controls to high-risk AI scenarios: high-risk AI industries and use case examples

• Contextual AI governance: why you should apply controls and manage AI risk at the use-case level