Get ready for the EU AI Act with Credo AI

As the first comprehensive AI regulation of its kind, the EU AI Act can be challenging for organizations of any size to implement.

At Credo AI, we understand the challenges and are ready to assist you in turning challenges into opportunities for better business. With our Responsible AI Governance Platform, you can prepare for the forthcoming EU AI Act by:

Staying up-to-date with the EU AI Act, ensuring your systems follow existing and emerging compliance needs.
Cataloging your AI Systems to identify their risk categories (unacceptable, high, limited, or minimal risk).
Creating an actionable plan to achieve compliance, with a readiness assessment and a tailored checklist for each AI System.

What is the EU AI Act?

The EU AI Act is currently a draft regulation, proposed by the European Commission on 21 April 2021. It consists of a comprehensive set of rules for providers and deployers of AI systems, detailing the obligations each entity has when using or deploying artificial intelligence in the European Union. The regulation is expected to pass by the end of 2023.

Non-compliance carries fines of up to 7% of global annual revenue or 40 million euros, whichever is higher.
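To make that ceiling concrete, the arithmetic is simply the greater of the two amounts. Below is a minimal sketch; the revenue figure is hypothetical:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on fines under the draft EU AI Act: the greater of
    7% of global annual revenue or EUR 40 million."""
    return max(0.07 * global_annual_revenue_eur, 40_000_000)

# Hypothetical company with EUR 2 billion in global annual revenue:
# 7% of 2,000,000,000 = EUR 140M, which exceeds EUR 40M, so 140M applies.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```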

What are businesses responsible for doing?

As an organization using or building AI systems, you are responsible for ensuring compliance with the EU AI Act and should be using this time to prepare. Overall, the information you are responsible for providing to both the public and the European Commission will depend on the risk level of your AI use case and on additional context, such as how your AI system was built and what data it was trained on.

Depending on the risk threshold of your systems, some of your responsibilities could include:

Providing a declaration of conformity stating that your system has been assessed by a notified body within the European Union, and affixing either a physical or digital CE marking to your AI system.

Providing technical documentation that includes information such as:

  • A general description of the AI system
  • A detailed description of the elements of the AI system & process for its development
  • Detailed information about the monitoring, functioning, and control of the AI system, in particular with regard to:
    - performance, accuracy, and intended purpose;
    - foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights, and discrimination;
    - human oversight measures needed in accordance with Article 14;
    - interpretation of the outputs of AI systems by the users; and
    - specifications on input data, as appropriate
  • A detailed description of the risk management system
  • A description of any change made to the system throughout its lifecycle

Providing a list of harmonized standards applied in full.

Providing a detailed description of the post-market monitoring plan.

Conducting a risk assessment to determine the level of risk associated with your AI system. Ensuring that your AI system complies with the specific requirements for its level of risk, and providing transparency and disclosure about your AI system as required.
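One way to track these obligations internally is a simple per-system record. The sketch below is illustrative only; the field names are our own shorthand for the items above, not terms defined in the Act:

```python
from dataclasses import dataclass, field

@dataclass
class ObligationChecklist:
    """Hypothetical per-system tracker for the obligations listed above."""
    system_name: str
    declaration_of_conformity: bool = False   # assessed by a notified body
    ce_marking_affixed: bool = False          # physical or digital CE marking
    technical_documentation: bool = False     # description, development, monitoring
    risk_management_described: bool = False   # risk management system documented
    lifecycle_changes_logged: bool = False    # changes over the system's lifecycle
    harmonized_standards: list = field(default_factory=list)  # standards applied in full
    post_market_monitoring_plan: bool = False
    risk_assessment_completed: bool = False   # risk level determined

# Example: a newly registered system starts with nothing in place.
checklist = ObligationChecklist(system_name="resume-screening-model")
print(checklist.risk_assessment_completed)  # False
```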

Find out how the EU AI Act will impact your organization.

How Credo AI can support you in preparing for the EU AI Act

Register your AI Systems

Register your organization’s AI Use Cases in our AI Registry to see whether they fall into the “high risk,” “limited risk,” or “minimal risk” category.

Map Requirements

Get access to a list of yes/no questions that map to the requirements of the EU AI Act, along with a report template that provides a “readiness score”: the percentage of EU AI Act requirements that your organization is meeting.

Reporting and Monitoring

Learn how “ready” a particular Use Case is for EU AI Act compliance, and get a clear list of everything that still needs to be done to achieve compliance with the EU AI Act.
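In spirit, the readiness score described above is just the fraction of mapped requirements answered “yes,” and the remaining “no” answers form the to-do list. A minimal sketch, assuming an illustrative question set (not Credo AI’s actual requirement mapping):

```python
# Illustrative yes/no answers; the questions are examples only.
answers = {
    "Technical documentation prepared": True,
    "Risk management system described": True,
    "Post-market monitoring plan in place": False,
    "Declaration of conformity issued": False,
}

def readiness(answers):
    """Return the percentage of requirements met and the outstanding items."""
    score = 100 * sum(answers.values()) / len(answers)
    todo = [req for req, met in answers.items() if not met]
    return score, todo

score, todo = readiness(answers)
print(f"Readiness score: {score:.0f}%")  # Readiness score: 50%
for req in todo:
    print("Still to do:", req)
```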

FAQs

What is the origin of this legislation (where did the “EU AI Act” come from)?

The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

Since April 2018, the three governance bodies of the European Union - the European Commission, Parliament, and Council - have considered how to comprehensively regulate Artificial Intelligence in the European Union’s Single Market.

In June of 2018, the European Commission appointed fifty-two experts (from academia, business, and civil society) to its “High-Level Expert Group on Artificial Intelligence” (HLEG), designed to support the implementation of the EU Communication on Artificial Intelligence (published in April 2018). The HLEG focused on outlining a human-centric approach to AI, and set out, in its Ethics Guidelines for Trustworthy AI, seven key requirements that AI systems should meet in order to be trustworthy:

  1. Human agency and oversight;
  2. Technical Robustness and safety;
  3. Privacy and data governance;
  4. Transparency;
  5. Diversity, non-discrimination and fairness;
  6. Societal and environmental well-being; and,
  7. Accountability.

The mandate of the AI HLEG ended in July 2020 with the presentation of two final deliverables: the Assessment List for Trustworthy AI (ALTAI) and its sectoral considerations on the policy and investment recommendations for trustworthy AI.

Then, in April 2021, the European Commission presented its “AI package,” which included a Communication on fostering a European approach to AI, an updated Coordinated Plan on AI, and the “Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence,” presented on 21 April 2021 and more commonly referred to as the European Union (EU) “AI Act.”

Why is the EU AI Act relevant for non-European companies?

Broadly speaking, any AI system developed by an EU provider - wherever in the world it is deployed - as well as any system developed outside of the EU and placed on the EU market, falls within the purview of the EU AI Act (meaning the AI system must comply with the obligations the EU AI Act imposes).

The EU AI Act’s extraterritoriality - meaning its application outside of the European Union’s borders - is expansive. The EU AI Act applies to AI systems that are developed and used outside of the EU if the output of those systems is intended for use in the EU.

Many AI providers and users based outside the EU, including those in the United States, will find their system outputs being used within the EU; such entities will therefore fall under the purview of the EU AI Act.

What AI systems are classified as high-risk?

The EU AI Act takes a risk-based approach to regulation, imposing different compliance obligations on AI systems depending on their level of risk. In the original text of the “Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence,” as written by the European Commission, four levels of risk are identified (prohibited practices are set out in Title II, and the high-risk categories in Annex III):

  1. Minimal or no risk systems: These are systems that have minimal or no impact on individuals' rights, safety, or interests. These systems are largely unregulated under the Act; providers are instead encouraged to adopt voluntary codes of conduct.
  2. Limited risk systems: These are systems that pose some risk to the rights, safety, or interests of individuals, but that risk is limited. These systems include those that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content. These systems are subject to transparency obligations, and they may also need to undergo a conformity assessment before they can be placed on the market.
  3. High-risk systems: These are systems that can significantly impact the rights, safety, or interests of individuals. These systems include those used in critical infrastructure, transport, and healthcare, as well as those used for law enforcement and border control. These systems are subject to transparency obligations, conformity assessments, and specific requirements related to data quality, fundamental rights, human oversight, and cybersecurity.
  4. Unacceptable risk systems: These are systems that are prohibited by law, such as those that enable real-time remote biometric identification in publicly accessible spaces, social scoring, or the manipulation of individuals without their knowledge or consent.
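To make the tiering concrete, here is a toy classifier over the four levels above. The trigger lists are heavily simplified stand-ins; the actual criteria (Title II prohibitions, the Annex III high-risk list) are far more detailed than keyword matching:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment + strict requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Simplified illustration only; real classification under the Act
# depends on detailed legal criteria, not string lookup.
PROHIBITED = {"social scoring", "real-time remote biometric identification"}
HIGH_RISK = {"critical infrastructure", "transport", "healthcare",
             "law enforcement", "border control"}
LIMITED = {"chatbot", "emotion detection", "content generation"}

def classify(use_case: str) -> RiskTier:
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in LIMITED:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot"))  # RiskTier.LIMITED
```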

In the most recent changes to the original European Commission text, proposed by the European Parliament, additional regulated categories were introduced: “foundation model providers” and “providers who specialize a foundation model into a generative AI system” (as defined in the revisions proposed to Article 28b) would also be identified as categories of enterprises that must comply with obligations under the EU AIA.

Adopt AI with confidence today

The Responsible AI Governance Platform enables AI, data, or business teams to track, prioritize, and control AI projects to ensure AI remains profitable, compliant, and safe.