“The EU AI Act is not merely a regulation. It’s an invitation to reimagine the role of AI in our businesses, placing responsibility, safety, and transparency at the core of innovation. As we take on this challenge, we’re also embracing an opportunity to create a more reliable, beneficial AI environment.” - Navrina Singh, Founder and CEO of Credo AI
On Wednesday, June 14th, the European Parliament overwhelmingly approved the EU AI Act, marking a significant step towards comprehensive Artificial Intelligence (AI) regulation worldwide. With 499 votes in favor, 28 against, and 93 abstentions, the decision represents a turning point in AI Governance as other nations look to follow the example set by the European Union - known as “the Brussels Effect” - in addressing AI-specific risks and preserving fundamental human rights while continuing to innovate responsibly with AI.
Now, the EU AI Act enters the final phase of the EU legislative process, and Spain will take the helm of the Council Presidency to lead the trilogue negotiations between the European Parliament, Commission, and Council. A final agreement on the EU AI Act is expected to be reached over the next six months (by December 2023).
For a comprehensive review of the most frequently asked questions regarding the EU AI Act – including what it is, what it stands for, who falls under its scope, the main obligations for enterprises, and the recent updates regarding General Purpose AI – please refer to our blog post: “What is the EU AI Act? Frequently asked questions, answered.”
As of today, June 16th, 2023, the adopted rules include an expanded ban on prohibited AI practices and clarified rules for generative AI systems. Unacceptable uses of AI systems have been expanded to include bans on intrusive and discriminatory applications of AI, such as predictive policing and emotion recognition systems. Additionally, systems that pose risks to people's health, safety, fundamental rights, or the environment will now be classified as high risk. This classification also includes AI systems used to influence voters.
Regarding the rules for generative AI: foundation model providers will be required to perform a risk assessment and register their model in an EU database before release, while providers of generative AI systems built atop foundation models will be required to a) disclose that content is AI-generated, b) prevent the generation of illegal content, and c) share information about copyrighted data used in model training.
EU AI Act Implications for Enterprises
For businesses operating within any of the twenty-seven countries that comprise the European Union, understanding and adhering to the EU AI Act will be crucial for successfully developing and deploying AI in Europe. Establishing governance processes, building competence, and implementing technology will be vital. Achieving compliance efficiently will require time and effort, so organizations should use this window to prepare.
Some of the responsibilities of businesses under the EU AI Act include:
- Conducting a risk assessment to determine the level of risk associated with their AI system.
- Ensuring compliance of their AI system with the specific requirements based on its level of risk.
- Providing transparency and disclosure about their AI system as required.
Credo AI is committed to helping organizations - from large enterprises to startups - comply with the EU AI Act comprehensively, economically, and efficiently through our Responsible AI Governance Platform. Learn how we can support your organization with our EU AI Act Primer, and contact us today to start preparing for the EU AI Act. In the meantime, take our quick survey to find out what parts of the Act apply to you based on where and how you're using AI.
AI Governance is Here to Stay
As the EU AI Act secures its place in the legislative realm, it becomes apparent that the European Union is not alone in shaping the future of artificial intelligence policy. Nations worldwide are making strides in their own approaches to AI governance.
Efforts to regulate AI occur at every level of government in the United States (local, state, and federal). The United States Federal Government has already developed the White House AI Bill of Rights, as well as the NIST AI Risk Management Framework and Practice Guide for companies, to help set voluntary guidelines that offer a structure for the industry to map, measure, manage, and govern AI risks.
Congress has introduced a number of new bills intended to mandate varying levels of AI disclosure and transparency reporting, including discussions of the creation of a new federal agency to regulate AI, in addition to re-introducing bills that require accountability throughout the “digital stack” for tech companies. At the global level, the European Union’s efforts to regulate AI extend beyond just the EU AI Act, also including the EU AI Liability Directive and the Digital Services Act (which includes an annual risk assessment for companies designated as “Very Large Online Platforms” or VLOPs).
There is ongoing work to determine how to regulate AI in Canada; Canada was one of the first countries to issue a prototype Algorithmic Impact Assessment and is now considering legislation known as “Bill C-27.” The United Kingdom has taken a hard look at how to empower existing regulators with the tools and resources they need to regulate AI as part of its National AI Strategy, as opposed to creating a new “AI Office” or AI regulatory body, a distinct approach from the EU. Singapore’s AI Model Governance Framework and Implementation and Self-Assessment Guide for Organizations (ISAGO) is also quite practical, as it attempts to translate ethical principles into pragmatic measures for companies. Singapore’s AI Verify project continues to pick up steam as the country helps to set global standards on AI and develop the AI assurance and standards ecosystem.
NATO published its Principles of Responsible Use, which include “Lawfulness, Responsibility and Accountability, Explainability and Traceability, Reliability, Governability, and Bias Mitigation,” and plans to iterate further on how these principles can be implemented in practice through its Data and Artificial Intelligence Review Board, launched in October 2022 with the engagement of all NATO Allies.
Last but certainly not least, International Standards Bodies, including ETSI, IEEE, ISO, CEN/CENELEC, and more, continue their work to create the standards that will underpin AI legislation and regulation globally.
Against this urgent and fast-moving backdrop, Credo AI is an active and vocal participant, working across borders and engaging with policymakers and regulators at every level.
AI Governance is a critical endeavor that cannot wait. By adopting governance proactively, companies can position themselves to benefit from AI opportunities while minimizing risks, ensuring they are ready for the new regulations and standards that will inevitably arise.
Credo AI remains steadfast in its commitment to supporting organizations worldwide, ensuring that AI values align with beneficial outcomes for humanity and businesses alike. As we navigate this interconnected world, where AI technologies transcend borders, our active engagement in shaping AI policy and assisting enterprises in adopting ethical standards through our Responsible AI Governance Platform continues to be our driving force.
Whether it's complying with emerging global regulations like the EU AI Act, ensuring the adoption of safe Generative AI, or implementing standards such as the NIST AI Risk Management Framework, we are dedicated to equipping businesses across the globe with the tools and knowledge to navigate the evolving AI landscape with speed and intention.
At Credo AI, we firmly believe that responsible AI governance is the key to unlocking the full potential of artificial intelligence. By upholding the highest standards and policies, we strive to create a future where AI is harnessed for the greater good. Join us in shaping a world where technology continues to serve humanity!