At Credo AI, we are committed to assisting our customers in safely adopting responsible AI at scale using our Responsible AI Governance platform.
As part of our work to operationalize Responsible AI policy, our policy and product teams collaborate closely with policymakers, regulatory bodies, standard-setting organizations, and key stakeholders from supranational institutions like the European Union (EU), the Organisation for Economic Co-operation and Development (OECD), and the North Atlantic Treaty Organization (NATO) to better understand modern, flexible, global approaches to AI governance.
Preparing our customers for the expected EU Artificial Intelligence Act (AIA), including providers and downstream developers of General Purpose AI Systems (GPAIS), remains a top priority for our team, so that businesses can continue to design, develop, and deploy Responsible AI around the world - including in the European Single Market.
Key takeaways:
- The European Parliament will vote to reach political agreement on the EU AIA on April 26th. It is highly likely that the Parliament's latest version of the EU AIA text will include new provisions concerning General Purpose AI Systems (GPAIS), adding a new framework and safeguards that place obligations on both GPAI providers and downstream developers.
- These new obligations will most likely include testing and technical documentation requirements: GPAIS providers would have to test for safety, quality, and performance standards, and both GPAIS providers and downstream developers would be expected to describe the model comprehensively via technical documentation (the model must be safe and understandable). This documentation could be akin to the format known as "AI model cards," and may be expected to include information on performance, cybersecurity, risk, quality, and safety.
The bottom line:
- If you are a provider of a GPAI model (e.g., companies like Anthropic, Baidu, Google, Meta, NVIDIA, or OpenAI) or a downstream developer of an application built on GPAI (e.g., companies like Copy.ai, HyperWrite, Jasper, Mem, or Writer), you need to start preparing for the EU AIA.
- Credo AI is part of the solution. We help organizations adopt Responsible AI practices at scale, enabling them to take advantage of the capability and efficiency offered by GPAIS while efficiently complying with the EU AIA to ensure seamless access to the world's largest trade bloc.
The strength of the European Single Market and the Brussels Effect
Europe's Single Market is home to 23 million companies and 450 million consumers (with a GDP per head of €25,000), and represents 18% of world gross domestic product (GDP). The EU remains one of the largest economies in the world and the world's largest trade bloc - accounting for almost one third of total world trade.
The General Data Protection Regulation (GDPR) set a global precedent for privacy protections in 2018, and the upcoming EU AIA is expected to have a similar global impact - known as the "Brussels Effect" - on the development and deployment of AI.
Credo AI spent the past few days on the ground in Brussels engaging with a diverse group of key stakeholders to better understand the current scope of the EU AIA - namely, what revisions to the text may be considered in light of the recent release and subsequent widespread adoption of large language models like OpenAI's ChatGPT.
The Credo AI team also discussed the timing for the "Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence" (the EU Artificial Intelligence Act) to move from current negotiations to a vote in Parliament, political trilogues, adoption, and enforcement.
What is the EU's Approach to General Purpose AI Systems?
First, the initial European Commission "Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence (the EU Artificial Intelligence Act)," published on 21 April 2021, largely exempted GPAIS from the scope of the EU AIA absent a predetermined "high-risk" end use case. Article 28 included an exemption clause for developers of General Purpose AI Systems, allowing them to avoid certain documentation and accountability obligations if the AI system in question did not have a predetermined use or context that could be considered "high-risk." In other words, GPAIS would not be classified as "high-risk" AI unless developers made significant modifications or adaptations to the system for purposes categorized as "high-risk."
Then, in December 2022, the Council of the European Union's proposal aimed to defer the categorization of, and resulting requirements for, GPAIS to a later date, adding an "implementing act" that would eventually specify how the requirements for high-risk AI systems could apply to GPAIS.
Taking into account the positions of the various EU Member States, the Czech Presidency of the Council released the Council's position at its meeting on 6 December 2022. In this version, the Council proposed the following text with regard to GPAIS:
3.1 A new Title IA has been added to account for situations where AI systems can be used for many different purposes (general purpose AI), and where there may be circumstances where general purpose AI technology gets integrated into another system which may become high-risk. The compromise text specifies in Article 4b(1) that certain requirements for high risk AI systems would also apply to general purpose AI systems. However, instead of direct application of these requirements, an implementing act would specify how they should be applied in relation to general purpose AI systems, based on a consultation and detailed impact assessment and taking into account specific characteristics of these systems and related value chain, technical feasibility and market and technological developments. The use of an implementing act will ensure that the Member States will be properly involved and will keep the final say on how the requirements will be applied in this context.
3.2 Moreover, the compromise text of Article 4b(5) also includes a possibility to adopt further implementing acts which would lay down the modalities of cooperation between providers of general purpose AI systems and other providers intending to put into service or place such systems on the Union market as high-risk AI systems, in particular as regards the provision of information.
Most recently, the European Parliament proposed text changes to the AIA that would place more accountability and transparency requirements on both GPAI providers and downstream developers. Given the recent, rapid adoption of large language models like OpenAI's ChatGPT and the resulting harms, the European Parliament has extensively discussed changes to the Commission's Proposal to address risks specific to generative AI (genAI) and Large Language Models (LLMs), which may not have a predetermined end use case and yet still be high-risk.
For its part, the European Parliament seems likely to propose a set of safeguards for both GPAI providers and downstream developers along these lines:
- GPAI providers would be required to conduct testing for reasonably foreseeable risks and expected outcomes;
- in addition to basic testing, GPAI providers would be required to test their model against specific safety, performance, and quality standards;
- downstream developers of GPAI would be expected to produce extensive documentation that allows external actors to understand the model comprehensively - akin to AI model cards - including information on the model's performance, cybersecurity safeguards, risk assessment, quality assurance, and safety; and
- the Regulation would take a total lifecycle management approach to GPAIS, including the continued provision of information about any significant updates or changes to the model.
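To make the documentation expectations above concrete, here is a minimal, purely illustrative sketch of what a model-card-style record covering those sections might look like in code. All field names and the example values are our own assumptions for illustration; the EU AIA does not prescribe this (or any) schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Hypothetical model-card record; field names are illustrative assumptions."""
    model_name: str
    intended_uses: list[str]
    performance: dict[str, float]           # e.g. benchmark scores
    cybersecurity_safeguards: list[str]
    risk_assessment: str
    quality_assurance: str
    safety_notes: str
    update_log: list[str] = field(default_factory=list)  # lifecycle changes

    def record_update(self, note: str) -> None:
        """Append a significant model change, supporting lifecycle management."""
        self.update_log.append(note)

    def is_complete(self) -> bool:
        """Check that every documentation section has been filled in."""
        sections = asdict(self)
        del sections["update_log"]  # updates accrue over time, may start empty
        return all(bool(value) for value in sections.values())

card = ModelCard(
    model_name="example-llm-7b",
    intended_uses=["text summarization", "drafting assistance"],
    performance={"benchmark_accuracy": 0.91},
    cybersecurity_safeguards=["prompt-injection filtering"],
    risk_assessment="Reasonably foreseeable risks reviewed quarterly.",
    quality_assurance="Regression test suite run before each release.",
    safety_notes="Refusal policy for unsafe content categories.",
)
card.record_update("v1.1: retrained on refreshed dataset")
print(card.is_complete())  # True: all required sections are populated
```

A structured record like this makes completeness checkable before release and keeps a running log of significant changes, which is the spirit of the lifecycle-management bullet above.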
Indications thus far suggest that the EU intends to address the risks of GPAI in the EU AI Act, striking a balance between prescribing the design of such tests and technical documentation (or AI model cards) and recognizing the need for more public transparency on elements such as GPAI intended uses, potential unexpected outcomes, and model training data.
It seems likely that the EU will keep the focus on the context of AI applications to determine the severity of risk, while also addressing the implications of GPAI foundation models for the complete ecosystem of AI applications.
Details on exactly what information GPAI providers and downstream developers will be required to include in this documentation might not appear in the Parliament's version of the text, and could instead be addressed in an additional annex or implementing act attached to the EU AI Act.
So what will the final text of the EU AI Act actually say about GPAIS?
Prior to the publication of the final text, we can only speculate. What is clear is that the European Union has recognized the need to address the unique risks and unintended outcomes of GPAIS, and will attempt to place more accountability and transparency requirements on both GPAI providers and downstream developers.
What happens next?
The European Parliament is expected to vote on the latest proposal for EU AI Act text changes on April 26th. The exact date that the EU AI Act will be enforceable depends on "when trains leave the station" at different points of the EU policy-making process - including when the European Parliament finalizes its version of the text, when "trilogues" begin (likely a 4-5 month process), and how long the implementation phase for the final EU AIA will be (currently projected at two years).
The EU AI Act is a top priority for the European Commission, and there is widespread political expectation that agreement on the text of the Regulation will be reached by the end of 2023 (during the Spanish Council Presidency). Companies using AI in any form should use this time to prepare, ensuring they are ready to be fully compliant with the EU AI Act as soon as possible.
Conclusion
It is highly likely that the Parliament's latest version of the text proposed for the EU AIA will include new provisions concerning general purpose AI, potentially adding a new framework and safeguards within the Act that place obligations on both providers and downstream developers of GPAIS. These obligations will likely include testing and technical documentation requirements: providers would have to test for safety, quality, and performance standards, and both providers and downstream developers would be expected to describe the model comprehensively via technical documentation (the model must be safe and understandable). This documentation could be akin to the format known as "AI model cards," and include information on performance, cybersecurity, risk, quality, and safety.
With optimism about the European Parliament's ability to reach political agreement in its key vote on April 26th, we believe there is strong momentum to make the final EU AI Act a reality in 2023, regardless of the exact date of Parliamentary agreement. We expect the AIA to be finalized by December 2023 at the latest, and enforceable as soon as 2025 (with fines of up to €30 million or 6% of a company's total worldwide annual turnover for the preceding financial year, whichever is higher).
In addition to levying higher fines for non-compliance than the GDPR, the separate EU AI Liability Directive (proposed by the European Commission in September 2022) will use the same definitions as the AIA, making the documentation and transparency requirements of the AIA also operational for liability. This means that non-compliance with the requirements of the AIA would also trigger protections provided under the AI Liability Directive, increasing the risk that your business could be held liable for damage caused by an AI system.
We believe the next months will be critical - and we at Credo AI see the urgency for anyone active in the EU, public and private organizations alike, to start building organizational readiness for AI Governance today. Credo AI is part of the solution, and we look forward to helping organizations adopt Responsible AI practices at scale, enabling them to take advantage of the capability and efficiency offered by GPAIS while efficiently complying with the EU AIA to ensure seamless access to the world's largest trade bloc.
Schedule a demo with our team today to discover how Credo AI can assist your organization in meeting the compliance requirements of the EU AI Act.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.