
Policy Prototyping & Regulatory Sandboxes: “Testing Out” the EU AI Act


September 5, 2023
Author(s)
Evi Fuelle
Contributor(s)
Catharina Doria
Susannah Shattuck
Ehrik Aldana

Executive Summary

Credo AI has been chosen as one of a select group of small and medium-sized enterprises (SMEs) to participate in a trailblazing initiative called Open Loop, designed to put into practice and test various aspects of the EU AI Act. What does it look like for a small business to fulfill the “Technical Documentation” requirements outlined in Article 11 and Annex IV? What should the mandate of an AI regulatory sandbox be? These are just a few of the many questions posed during the exercise to better understand how the EU AI Act would work in practice for real enterprises working toward compliance.

For the Open Loop program, Credo AI participated in hands-on testing of the implementation of the EU AI Act and contributed thought leadership to a robust discussion on how the Act will be implemented, drawing on our experience working with industry and our expertise in creating governance artifacts (transparency reports, algorithmic impact assessments, algorithm design evaluations, model cards, and more) for enterprises of all sizes and a variety of AI use cases. Credo AI believes it is just as important to prototype and test policy as it is to test technology, and the guardrails we develop should go hand-in-hand with the technology itself.
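As a rough illustration of one kind of governance artifact mentioned above, the sketch below shows how a minimal model card might be captured as structured data. The fields, values, and the `credit_risk_model` example are hypothetical, not a Credo AI template or an Annex IV requirement; real documentation is far more detailed.

```python
# A minimal, hypothetical model card captured as structured data.
# Field names and values are illustrative only.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    evaluation_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


card = ModelCard(
    name="credit_risk_model",  # hypothetical example
    version="1.2.0",
    intended_use="Internal pre-screening of consumer credit applications",
    out_of_scope_uses=["Final, fully automated credit decisions"],
    training_data_summary="Anonymized historical applications, 2018-2022",
    evaluation_metrics={"auc": 0.87, "demographic_parity_diff": 0.04},
    known_limitations=["Not validated for small-business lending"],
)

print(card.to_json())
```

Even a lightweight artifact like this makes it easier to answer the kinds of documentation questions the EU AI Act raises, because the information lives in one reviewable, versionable place.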

Open Loop Background 

In early June 2022, Credo AI had the opportunity to participate in the EU AI Act Open Loop program, an experimental governance initiative supported by Meta that recently won the CogX Award for Outstanding Achievements & Research Contributions in Tech Regulation. 

The Open Loop Program brings together regulators, governments, tech businesses, academics, and civil society to inform the AI governance debate with evidence-based policy recommendations from startups that have prototyped regulations such as the EU AI Act. Through experimental governance methods like policy prototyping, Open Loop members apply new and/or existing governance frameworks to emerging technologies, so stakeholders can better understand how well those frameworks will work in the real world.

What is the Open Loop EU AI Act Program?

The Open Loop EU AI Act program is one of the most comprehensive policy prototyping initiatives worldwide, engaging over 60 participants from 50 AI and ML companies. The program is structured into three pillars, each assessing and scrutinizing key articles of the EU AI Act proposal.

Pillar 1: Operationalizing the "Requirements for AI Systems"

This pillar focuses on understanding the clarity and feasibility of implementing AI system requirements, including risk management and human oversight. It produced three reports:

  1. Report on Operationalizing AI System Requirements: Analyzing the feasibility of implementing requirements for AI systems.
  2. Deep Dive Report on Risk and Transparency: Examining human oversight, transparency, and risk management requirements in the AI Act.
  3. Report on Transparency Obligations for AI Systems: Assessing when and how individuals should be informed when interacting with an AI system.

Pillar 2: AI Regulatory Sandboxes

This pillar explores regulatory sandboxes (Article 53) and their attractiveness to organizations developing new AI systems. It aims to test sandbox requirements, scope, and conditions to foster innovation and improve compliance.

Pillar 3: Taxonomy of AI Actors

The third pillar presents an alternative taxonomy of AI actors, evaluating the efficacy and suitability of the provider-user paradigm.

Policy prototyping is a way to test-drive regulations.

During the inception phase of test-driving the EU AI Act (AIA), Credo AI had the opportunity to provide industry insights, based on our experience operationalizing Responsible AI governance in practice, and recommendations on the European AI legal framework. Credo AI engaged with other key participants in the Responsible AI ecosystem, including small and medium-sized enterprises developing and deploying machine learning tools, focusing on “practically testing” the five main areas outlined in the EU AI Act:

  1. Taxonomy of actors;
  2. Risk management;
  3. Data quality requirements;
  4. Technical documentation; and,
  5. Human oversight.

Based on our unique experience working with industry partners to create Responsible AI governance, Credo AI shared insights designed to inform practical updates to the EU AI Act, including the following:

Taxonomy of Actors: 

  • It is essential to point out that there is not always a clear distinction between the user and the provider. Many organizations develop AI/ML systems for their own internal use, blurring the line between provider and user. For example, a company may develop internal ML systems for credit risk prediction or fraud detection on customer transactions. 
  • In terms of determining liability, it is essential to consider whether someone has asked for proof that the system, model, or data was fair. Credo AI advocates for creating a regulatory framework that encourages all parties involved in the value chain to take responsibility for and be incentivized to ask for proof that systems are built fairly.
  • Dive in: see the Open Loop report on the taxonomy of AI actors, which proposes an alternative taxonomy.

Risk Assessment: 

  • Credo AI believes that responsibility should be cultivated throughout the entire AI development life cycle. Hidden bias is a major reason why assessing AI systems across all dimensions of risk is always necessary (one concrete example of such a check is sketched after this list). The lack of specificity around "known and foreseeable risks associated with each high-risk AI system" can create confusion in the industry. Providing more specific guidelines for identifying and analyzing AI risks is crucial, as different individuals may have varying levels of understanding and expertise in this area.
  • Check out the Open Loop report on Human Oversight, Transparency, Risk Management requirements in the AI Act and the report titled “Towards informed AI interactions” for more insights.
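To make the point about hidden bias more concrete, here is a minimal sketch that computes one simple fairness indicator: the demographic parity difference between two groups' positive-outcome rates. The data, group labels, and the 0.1 review threshold are entirely hypothetical; a real risk assessment would cover many more dimensions (robustness, privacy, security, and so on).

```python
import numpy as np

# Hypothetical model outputs: 1 = approved, 0 = denied,
# alongside a protected-attribute group label per applicant.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
groups      = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def demographic_parity_difference(preds: np.ndarray, grps: np.ndarray) -> float:
    """Absolute gap between the groups' positive-prediction rates."""
    rates = [preds[grps == g].mean() for g in np.unique(grps)]
    return float(max(rates) - min(rates))

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")

# A hypothetical governance rule: flag the system for review if the gap
# exceeds a tolerance agreed on for this use case (0.1 here, purely illustrative).
if gap > 0.1:
    print("Flag for review: potential hidden bias across groups.")
```

The value of such checks is less the specific metric than the habit of asking for quantitative evidence at every stage of the life cycle, which is exactly where clearer regulatory guidance would help.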

Transparency:

  • Transparency poses two significant challenges, one technical and one sociological:
  • The technical challenge is that many emerging AI/ML techniques, such as complex neural networks and large language models, are highly complex and difficult to explain. While it is possible to use explainable models instead of complex ones, there is often a trade-off between human-legible explanations of model behavior and model accuracy (see the sketch after this list). This trade-off requires careful consideration and a balancing of what matters most in a given situation.
  • The sociological challenge is harder to solve and relates to the level of technical understanding and expertise expected of the humans overseeing or interpreting an AI system, as well as the types of decisions they will make based on the model's explanations.
  • There is also the question of how to evaluate the "effectiveness" or legibility of AI explanations. Research has shown that some explanations of model behavior can provide false confidence in the model's accuracy, even when the outcomes are incorrect. More research is needed to understand what constitutes a "good" explanation of AI behavior and to avoid relying on explanations that do not actually increase our understanding of the system. Given the sociological challenges outlined above, it is also worth considering "linguistic equity" in AI decision-making.
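As a rough illustration of the accuracy-versus-explainability trade-off described above, the sketch below compares a shallow decision tree, whose rules can be printed and read, with a gradient-boosted ensemble trained on the same data. The dataset and model choices are assumptions for demonstration only, not a prescription from the EU AI Act or the Open Loop program.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Any binary-classification dataset works; this built-in one is just convenient.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree: lower capacity, but its decision rules are human-readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# A boosted ensemble: typically more accurate, but far harder to explain.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("Shallow tree accuracy:   ", accuracy_score(y_test, tree.predict(X_test)))
print("Boosted ensemble accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))

# The tree's entire decision logic fits in a few printable lines --
# the kind of explanation a human overseer can actually read end to end.
print(export_text(tree, feature_names=list(X.columns)))
```

Typically the ensemble scores somewhat higher on held-out data, while only the tree yields a complete, legible set of rules; deciding which property matters more is exactly the context-dependent judgment the Act asks providers to make.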

What works, in our experience?

Overall, policymakers face the challenge of creating mandatory compliance checks that incentivize Responsible AI across industry without hindering the flexibility essential to this field. Hence, we believe specific and targeted compliance checks are one way to properly incentivize Responsible AI throughout the AI value chain.

Importantly, we do not advocate for a one-size-fits-all approach. Evaluating an AI system requires understanding the context in which it is used. At Credo AI, we agree that regulators should approach policy in the same experimental and iterative manner in which technology is developed.

In Conclusion

By participating in Open Loop, Credo AI had the opportunity to put policy experimentation into practice, helping to influence and shape the policy debate with startups from all over the world on an important piece of AI regulation: the EU AI Act. To learn more about the EU AI Act, please refer to our other blog posts.

At Credo AI, we are here to help you prepare for compliance with the EU AI Act. Schedule a call today and discover how our Responsible AI Governance Platform can assist you in initiating your AI risk and compliance journey effectively!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.