AI Compliance

What is the EU AI Act? Frequently asked questions, answered.

This factsheet is intended to answer some of the most common questions about the EU AI Act, providing essential insights to help businesses prepare for compliance and navigate the evolving landscape of AI regulation successfully.

May 18, 2023
Author(s)
Evi Fuelle
Contributor(s)
Ehrik Aldana

For businesses operating within any of the twenty-seven countries that make up the European Union, understanding and complying with the EU AI Act will be key to successfully developing and deploying AI in Europe, both to avoid penalties and to contribute to the responsible deployment of AI worldwide.

What is the EU AI Act?

The European Union (EU) “AI Act,” also known as the “Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence,” is currently a draft Regulation proposed by the European Commission on 21 April 2021. The Regulation lays out a comprehensive set of rules for AI users, developers, and providers, detailing the obligations each entity has when using or deploying artificial intelligence in the European Union.

Why was it proposed?

The EU AI Act was proposed to address two fundamental objectives: 

  1. harnessing the immense potential of artificial intelligence (AI) for societal and industrial benefits; and, 
  2. ensuring the protection of individuals and promoting responsible AI practices. 

Recognizing the significant advantages that AI brings to various sectors, the EU AI Act is the European Union’s approach to establishing a robust regulatory framework that strikes a balance between innovation and the safeguarding of fundamental rights.

What companies are “in scope” for the EU AI Act?

The extraterritorial impact of the AI Act will vary widely between sectors and applications, but determining which companies are “in scope” largely comes down to which risk tier the AI system being used, developed, or deployed falls into. The Act defines four risk tiers, with examples of each:

  • Unacceptable risk: such as social credit scoring systems and remote biometric identification in publicly accessible spaces for law enforcement purposes
  • High risk: critical infrastructures (like transport), education (like exam scoring), employment (such as resume-screeners for hiring), financial credit scoring, law enforcement and administration of justice, border control (such as verification of the authenticity of travel documents), and all remote biometric facial identification systems.   
  • Limited risk: chatbots, for example.
  • Minimal or no risk: AI-enabled video games or spam filters.
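
To make this triage concrete, here is a minimal Python sketch, assuming a system is described as a plain dictionary. The attribute names (social_scoring, domain, interacts_with_humans, and so on) are hypothetical, and whether a real system is in scope is a legal determination, not a keyword lookup:

    # Toy triage of an AI system description into the four risk tiers
    # listed above. Illustrative only, not a legal analysis.
    HIGH_RISK_DOMAINS = {
        "critical_infrastructure", "education", "employment",
        "credit_scoring", "law_enforcement", "border_control",
    }

    def risk_tier(system: dict) -> str:
        if system.get("social_scoring") or system.get("public_remote_biometric_id"):
            return "unacceptable"
        if system.get("domain") in HIGH_RISK_DOMAINS:
            return "high"
        if system.get("interacts_with_humans"):  # e.g., chatbots
            return "limited"
        return "minimal"  # e.g., spam filters, AI-enabled video games

    print(risk_tier({"domain": "employment"}))         # -> high
    print(risk_tier({"interacts_with_humans": True}))  # -> limited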

Given the most recent European Parliament amendments (agreed to by the Parliament on 11 May 2023), the EU AI Act is likely to also include specific transparency and reporting requirements for providers of foundation models, which are defined as “AI models trained on broad data at a scale that is designed for the generality of output, and can be adapted to a wide range of distinctive tasks,” as well as general purpose AI systems, which are defined as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.” Read our blog post on GPAIS and the EU AI Act to learn more.

Globally, AI systems in regulated products will be significantly affected, and the EU AI Act’s transparency requirements for AI that interacts with humans (such as chatbots and emotion detection systems) will most likely lead to disclosure notices appearing on websites and apps worldwide.

High-risk AI systems for human services will also be strongly affected when they are built into online or otherwise internationally interconnected platforms, but many AI systems that are more localized or individualized will not be significantly affected.

What does the “risk-based approach” mean in regard to the EU AI Act?

The EU AI Act outlines a risk-based approach in which the obligations for an AI system are proportionate to the level of risk that the system poses, taking into account how it was designed and what it is intended to be used for. Documentation, auditing, transparency, and process requirements all scale with the system’s risk level. The AI Act defines four levels of risk as follows:

  1. Minimal or no risk systems: These are systems that have minimal or no impact on individuals' rights, safety, or interests, such as spam filters or AI-enabled video games. They carry no mandatory obligations, though providers may adopt voluntary codes of conduct.
  2. Limited risk systems: These are systems that pose some risk to the rights, safety, or interests of individuals, but that risk is limited; chatbots are a common example. They are subject to light-touch transparency obligations, such as notifying users that they are interacting with an AI system.
  3. High-risk systems: These are systems that can significantly impact the rights, safety, or interests of individuals, including those used in critical infrastructure, transport, and healthcare, as well as those used for law enforcement and border control. They are subject to transparency obligations, a conformity assessment before being placed on the market, and specific requirements related to data quality, fundamental rights, human oversight, and cybersecurity.
  4. Unacceptable risk systems: These are systems that are prohibited by law, such as those that enable social scoring or that manipulate individuals without their knowledge or consent. “Social scoring” by public authorities and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes are prohibited (with some narrow exceptions).
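
As a rough illustration of how obligations scale with risk, the following sketch maps each tier to the obligations summarized above. The strings are shorthand for what are, in the Act itself, far more detailed requirements:

    # Obligations proportionate to risk tier, as summarized in the list
    # above. Shorthand only; the Act's actual requirements are far more
    # detailed.
    OBLIGATIONS_BY_TIER = {
        "minimal": ["none mandatory; voluntary codes of conduct"],
        "limited": ["notify users they are interacting with an AI system"],
        "high": [
            "conformity assessment before market placement",
            "technical documentation and record-keeping",
            "data quality requirements",
            "human oversight",
            "accuracy, robustness, and cybersecurity",
        ],
        "unacceptable": ["prohibited; may not be placed on the EU market"],
    }

    for tier, duties in OBLIGATIONS_BY_TIER.items():
        print(f"{tier}: {'; '.join(duties)}")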

What are businesses responsible for doing?

Establishing governance processes, building competence, and implementing technology for efficient compliance takes time and effort. As an organization using or building AI systems, you are responsible for ensuring compliance with the EU AI Act, and should be using this time to prepare. Overall, the information you are responsible for providing to both the public and the European Commission will depend on the risk level of your AI use case and on additional context about how your AI system was built and what data it was trained on.

Depending on the risk level of your systems, some of your responsibilities could include:

1. Providing a certificate of conformity stating that your system has been assessed by a notified body within the European Union, and affixing a physical or digital CE marking to your AI system.

2. Providing technical documentation that includes information such as the following (a minimal machine-readable sketch appears after this list):

  • A general description of the AI system
  • A detailed description of the elements of the AI system & process for its development
  • Detailed information about the monitoring, functioning, and control of the AI system, in particular with regard to: performance and accuracy; intended purpose; foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights, and discrimination; human oversight measures needed in accordance with Article 14; interpretation of the outputs of AI systems by users; and specifications on input data, as appropriate
  • A detailed description of the risk management system
  • A description of any change made to the system throughout its lifecycle

3. Providing a list of harmonized standards applied in full.

4. Providing a detailed description of the post-market monitoring plan.

5. Conducting a risk assessment to determine the level of risk associated with your AI system. Ensuring that your AI system complies with the specific requirements for its level of risk, and providing transparency and disclosure about your AI system as required.
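
To illustrate what the technical documentation in item 2 might look like as a structured record, here is a minimal sketch, assuming a simple Python dataclass. The field names are hypothetical and do not come from the Act or any official template:

    from dataclasses import dataclass, field

    # Hypothetical machine-readable skeleton for the technical
    # documentation elements listed in item 2 above.
    @dataclass
    class TechnicalDocumentation:
        general_description: str
        development_process: str
        monitoring_and_control: str  # performance, accuracy, oversight, input data
        risk_management_system: str
        lifecycle_changes: list = field(default_factory=list)
        harmonized_standards: list = field(default_factory=list)
        post_market_monitoring_plan: str = ""

    doc = TechnicalDocumentation(
        general_description="Resume-screening model used in hiring (high-risk).",
        development_process="Gradient-boosted classifier; training data described here.",
        monitoring_and_control="Monthly accuracy and bias audits; human review of rejections.",
        risk_management_system="Maintained in internal risk register.",
    )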

Not sure how the EU AI Act will impact your business? Take our Credo AI EU AI Act assessment to find out today.

What are the penalties for non-compliance?

Penalties vary by risk category:

  • Unacceptable risk (prohibited practices): These uses are banned outright. Fines of up to 6% of global revenue or 30 million euros, whichever is higher.
  • High-risk systems (conformity assessment): Providers of high-risk systems must perform a conformity assessment to demonstrate compliance with requirements including a risk management system; data requirements; technical documentation; record-keeping; transparency on the system’s functioning; human oversight; accuracy, robustness, and cybersecurity; and post-market monitoring. Fines of up to 4% of global revenue or 20 million euros, whichever is higher, apply to all violations except the data requirements, which carry the same fines as prohibited systems (6% or 30 million euros).
  • Limited risk (transparency obligations): Providers must notify users that they are engaging with an AI system. Fines of up to 4% of global revenue or 20 million euros, whichever is higher.
  • Minimal risk (voluntary codes of conduct): Providers can choose to comply with voluntary codes of conduct. There are no mandatory requirements, so no penalties apply.
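
To make the “whichever is higher” rule concrete, here is a short worked example in Python, assuming a hypothetical company with 2 billion euros in global annual revenue:

    # Fine ceilings from the table above: the maximum fine is the greater
    # of a fixed amount and a percentage of global annual revenue.
    def max_fine(global_revenue_eur: float, pct: float, floor_eur: float) -> float:
        return max(global_revenue_eur * pct, floor_eur)

    revenue = 2_000_000_000  # hypothetical: 2 billion euros global revenue

    # Prohibited practices (and high-risk data-requirement violations):
    # up to 6% of global revenue or 30 million euros, whichever is higher.
    print(max_fine(revenue, 0.06, 30_000_000))  # 120000000.0

    # Most other violations: up to 4% or 20 million euros.
    print(max_fine(revenue, 0.04, 20_000_000))  # 80000000.0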

When will the AI Act come into effect?

The exact date when the EU AI Act will become enforceable depends on the remaining stages of the EU policy-making process, beginning with the completion of “trilogues” (the process by which the European Parliament, Council, and Commission agree on a final text). The trilogues will also determine the length of the “implementation phase” of the final EU AI Act, meaning the time companies have to come into compliance after the Act is published as a Regulation, which could be as short as six months.

The EU AI Act is a top priority for the European Commission. On Thursday, May 11, 2023, the European Parliament’s Civil Liberties and Internal Market committees jointly adopted the text of proposed changes to the European Commission’s “Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence” (the EU AI Act) by a large majority (84 votes in favor, seven against, and twelve abstentions). The Parliament’s political agreement on its version of the proposed Regulation text means that discussions are moving forward to the final stage of the legislative process.

Now, the European Parliament can move forward with a plenary vote on adoption, tentatively set for June 14, 2023. Once approved in plenary, the proposal will enter the last stage of the European Union legislative process, kicking off final negotiations with the EU Council and Commission (the so-called trilogues). The trilogue process is expected to be completed during the Spanish Council Presidency of the EU (July - December 2023), with the text of the “Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence” (the EU AI Act) finalized no later than December 2023. Read more in our latest blog post.

How can Credo AI help my organization with compliance?

Credo AI is committed to helping organizations comply with the EU AI Act through our Responsible AI Platform software. Our AI Governance solutions—Context Driven AI Governance Platform, Credo AI Lens™, and Policy Packs—can help ensure that your organization is prepared to track and report on compliance with the EU AI Act’s requirements. Accomplish the following with Credo AI:

  • Confidently register, assess, and track your AI use cases. We provide a comprehensive framework for assessing the risk level of your AI systems, enabling you to identify and prioritize areas for compliance.
  • Stay up-to-date with AI policy. Our Policy Packs track and operationalize AI laws, regulations, and industry standards, making sure your systems follow existing and emerging compliance needs.
  • Simplify the governance process. Our platform integrates with a variety of AI systems and tools, making it easy to incorporate responsible AI practices into your existing workflows.
  • Facilitate collaboration among cross-functional teams, such as data scientists, legal, compliance, and product teams, to promote holistic AI governance within your organization.
  • Promote trust and transparency by utilizing our platform’s customizable report templates for communicating AI system details to regulators, customers, and other key stakeholders.

Conclusion

The Act is expected to have a global impact, ensuring safer and more accountable deployment of AI. Proactive compliance can position companies to benefit from AI opportunities while minimizing risks. Our team is available to help you develop the processes and capabilities needed to come into compliance with the EU AI Act, as well as with the many policies that lie ahead in 2023 and beyond.

Request a demo today and learn more about how we can help you get started on your AI risk and compliance journey.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.