AI Compliance

EU AI Act Political Agreement: What You Need to Know for 2024

After marathon Trilogue negotiations, the 8 December 2023 political agreement on the EU AI Act is a historic moment for AI governance globally.

December 21, 2023
Author(s)
Evi Fuelle
Lucía Gamboa

The 8 December 2023 political agreement on the EU AI Act is a historic moment for AI governance: the marathon negotiations stretched over three days, with negotiators announcing the deal late Friday night.

European Commission President Ursula von der Leyen stated that the EU’s AI Act “will make a substantial contribution to the development of global guardrails for trustworthy AI,” while Commissioner Thierry Breton referred to the agreement as “historic.” Parliamentary co-rapporteurs MEP Brando Benifei and MEP Dragos Tudorache, both of whom spoke at Credo AI’s Leadership Responsible AI Summit in November, have lauded the political agreement as “the world’s first horizontal legislation on artificial intelligence” that “sets rules for large, powerful AI models… that will significantly impact our digital future.” 

The 8 December agreement reflects years of hard work by the European Parliament, the Council, and the Commission to develop comprehensive regulation of artificial intelligence that stimulates further innovation and investment, while meeting the urgency of the moment with real, concrete guardrails that require transparency and accountability from these systems.

Credo AI is proud to have contributed to the development of the EU AI Act since its inception, providing input to the High-Level Expert Group (HLEG) on AI in 2018 and engaging in constructive discussions with both the European Commission and the European Parliament to share our insights and learnings in operationalizing Responsible AI Governance for industry.

While the final text of the EU AI Act is not expected to be published until Spring 2024, enterprises should begin to assess which of their current and planned AI systems and models fall within the scope of the Act, and conduct a gap analysis against its key requirements. Credo AI is here to help: watch our webinar, “EU AI Act Update: What We Know and How to Prepare,” and start getting up to speed with three key things you need to know:

1. What happened on 8 December 2023, and what happens next?

  • Under the leadership of the Spanish Council Presidency, political agreement on the EU AI Act was reached between the European Parliament, the Council, and the European Commission, concluding the Trilogue process.
  • The majority of the text of the EU AI Act was agreed upon, but some technical details remain and must be finalized in technical meetings in the coming weeks. Then, in line with the EU’s Ordinary Legislative Procedure, the Parliament’s Internal Market and Civil Liberties Committees will vote on the final text (expected at the end of January 2024).
  • After the Parliament’s vote, the final text of the EU AI Act will be published in the Official Journal of the EU (expected Spring 2024); the Act enters into force twenty days after publication, and the enforcement timelines begin from that date.
  • In the interim, the European Commission has also launched an "AI Pact" initiative, intended to bring together AI developers from around the world who commit, on a voluntary basis, "to implement key obligations of the AI Act ahead of the legal deadlines."

2. Who will the EU AI Act affect?

  • The requirements of the EU AI Act are expected to apply not only to European companies but to all AI systems affecting people in the EU, including any company placing an AI system on the EU market and any company whose system outputs are used within the EU, giving these requirements global implications.

  • The EU AI Act is broad in scope, with significant obligations along the value chain. It focuses on the impact of AI systems on people and the protection of fundamental rights.

3. What are the main elements of the text agreed on 8 December 2023?

  • Definition of AI: The definition of AI in the EU AI Act is aligned with the recently updated OECD definition: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

  • Risk-based approach: Four categories of risk were agreed upon: prohibited, high, limited, and minimal. The EU AI Act will take a tiered approach to ensure safety for more powerful systems while reducing the compliance burden for less powerful systems.

  • Scope: The Act focuses on providers and downstream users of AI systems. Providers will need to self-assess their AI systems to determine the applicable risk level and the corresponding obligations. Conformity assessments will be required when the original provider or a third party substantially modifies a system’s intended purpose.

  • Prohibited use cases: Legislators agreed on the “unacceptable risk” category of AI systems that will be banned from the EU market. This designation covers systems that manipulate human behavior to circumvent free will, social scoring, and "certain elements of predictive policing." Remote biometric identification is expressly prohibited except for judicially approved law enforcement uses. Emotion-recognition technology in the workplace and in educational institutions will also be prohibited.
  • Exemptions: National security purposes outside the scope of EU law-making authority, such as military and defense; certain law enforcement uses; scientific research and development; and open-source systems, unless their use is classified as prohibited or high-risk.
  • High-risk AI: High-risk AI systems are those used as a safety component of a product, as well as AI systems that pose a significant risk of harm to health, safety, or fundamental rights. High-risk systems must undergo conformity assessments before being placed on the market.
  • Generative AI: Specific transparency and disclosure requirements apply. Individuals must be informed when they are interacting with AI, and AI-generated content must be labeled and detectable.
  • General Purpose AI (GPAI): Requirements for GPAI and foundation models include transparency obligations such as providing technical documentation, providing detailed summaries of the content used for training, and complying with EU copyright law.
  • High-impact GPAI: In addition to the requirements for GPAI, “powerful” models posing a systemic risk must also undergo model evaluations, assess and mitigate systemic risks, and document and report to the European Commission any serious incidents and the corrective actions taken. Developers of high-impact GPAI will also be required to conduct adversarial testing of the model, ensure an adequate level of cybersecurity and physical protections, and document and report the model’s estimated energy consumption.
  • Enforcement: Timelines range from 6 to 24 months. The prohibitions will apply six months after the final text is published, the provisions on transparency and governance requirements twelve months after, and all other provisions two years after.
  • Penalties: Up to 7% of global annual turnover or €35 million for prohibited AI violations, up to 3% of global annual turnover or €15 million for most other violations, and up to 1.5% of global annual turnover or €7.5 million for supplying incorrect information; a rough illustration of how these caps work appears below.
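
For a concrete sense of how these tiered caps could apply, here is a minimal sketch in Python. It is illustrative only: it assumes the "whichever is higher" reading of the caps widely reported for the political agreement, and the company turnover figure is hypothetical; the final text governs.

    # Illustrative sketch of the reported EU AI Act penalty caps.
    # Assumption: the fine is capped at the HIGHER of the turnover
    # percentage and the fixed euro amount for each violation tier.
    PENALTY_TIERS = {
        "prohibited_ai": (0.07, 35_000_000),          # 7% or EUR 35m
        "most_other_violations": (0.03, 15_000_000),  # 3% or EUR 15m
        "incorrect_information": (0.015, 7_500_000),  # 1.5% or EUR 7.5m
    }

    def max_fine(global_annual_turnover_eur: float, tier: str) -> float:
        """Maximum possible fine for a violation tier, in euros."""
        pct, fixed_eur = PENALTY_TIERS[tier]
        return max(pct * global_annual_turnover_eur, fixed_eur)

    # Hypothetical company with EUR 2 billion in global annual turnover:
    print(max_fine(2_000_000_000, "prohibited_ai"))          # 140,000,000.0
    print(max_fine(2_000_000_000, "incorrect_information"))  # 30,000,000.0

In practice, the fixed euro amounts are the binding cap for smaller companies, while the turnover percentage dominates for large multinationals.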

Credo AI is Here to Help You Prepare

There are common elements of the AI governance journey that enterprises can benefit from adopting today. The EU AI Act will require enterprises to have clear visibility into, and control over, where and how they are using AI. Our AI Registry and the Credo AI Governance Platform make it easy for any enterprise to implement an AI risk management system and take the initial steps needed to prepare for compliance with the EU AI Act, international standards, U.S. state- and federal-level AI requirements, and more.

Credo AI is ready to support enterprises on their Responsible AI Governance journey. We have built a robust Responsible AI Governance platform that supports contextual AI governance for companies of all sizes, from the Global 2000 to early-stage startups, across industries including financial services, insurance, healthcare, human resources, public-sector use of AI, and more.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.