AI Compliance

EU AI Act Update - What We Know and How to Prepare

December 13, 2023

On Dec 8, 2023, the institutions of the European Union (EU) reached political agreement on the Artificial Intelligence (AI) Act following months of intense negotiations.

The requirements in the EU AI Act are expected to apply not only to European companies but to all AI systems impacting people in the EU, including any company placing an AI system on the EU market and companies whose system outputs are used within the EU, giving these requirements global reach. Enterprises deploying AI technology should therefore begin putting risk management, quality management, and AI governance systems in place now in order to continue operating with speed and efficiency in Europe.

At Credo AI, we have watched commercial AI develop rapidly over the last several years, and it is clear that the age of AI — and AI governance — has begun. The AI Act is a harbinger of the types of requirements companies will need to comply with globally, both to protect consumers and to ensure the safe, effective use of AI internally.

Watch this Credo AI EU AI Act briefing to learn:
• What happened Friday, Dec 8, 2023, and what's next for the EU AI Act
• Background and scope of the EU AI Act
• High Risk AI examples and expected requirements
• How to get started with an AI risk management system and human oversight

SPEAKERS
Evi Fuelle
Global Policy Director
Lucía Gamboa
Policy Manager
Susannah Shattuck
Head of Product


You may also like


How NIST Pioneered GenAI Controls—and How to Operationalize Them

Chances are, you’ve felt the expanding mandate for AI usage at your company, with GenAI being embedded in every department and function. But unapproved usage or "shadow AI" is skyrocketing, with over 50% of employees using unapproved generative AI tools at work, according to a Salesforce study.

On April 29, 2024, the National Institute of Standards and Technology (NIST) released its initial public draft of the AI Risk Management Profile for GenAI, which defines a group of risks that are novel to or exacerbated by the use of Generative AI (GenAI) and provides a set of actions to help organizations manage these risks at the use case level to power scalable, safe GenAI adoption.

The trailblazing new draft AI RMF GenAI Profile was developed over the past twelve months, drawing on input from NIST’s generative AI public working group of more than 2,500 members, of which Credo AI is a member, as well as the prior work on NIST’s overarching AI Risk Management Framework.

Credo AI is excited to present this webinar, explaining these newly defined GenAI risks and controls, as well as how to approach comprehensive AI governance and risk management for enterprises of all sizes that want to safely deploy GenAI tools.

Watch this webinar to learn:
• An overview of newly published GenAI governance documents, with a deep dive into NIST AI 600-1
• How to apply GenAI controls to high-risk AI scenarios: high-risk AI industries and use case examples
• Contextual AI governance: why you should apply controls, and manage AI risk, at the use-case level