Regulatory Articles

What is the EU AI Act? Frequently asked questions, answered.

For businesses operating in any of the twenty-seven countries that make up the European Union, understanding and complying with the EU AI Act will be key to successfully developing and deploying AI in Europe: avoiding penalties while actively contributing to the responsible deployment of AI worldwide. This factsheet answers some of the most common questions about the EU AI Act, providing essential insights to help businesses prepare for compliance and navigate the evolving landscape of AI regulation.

Mastering AI Risks: Building Trustworthy Systems with the NIST AI Risk Management Framework (RMF) 1.0

To support the rapid growth of Artificial Intelligence adoption, the National Institute of Standards and Technology (NIST) gathered extensive stakeholder feedback from both the public and private sectors before publishing the NIST AI Risk Management Framework 1.0 (AI RMF) on January 26, 2023. Two months later, on March 30, 2023, NIST released a companion AI RMF Playbook for voluntary use, which suggests ways to navigate the AI RMF and incorporate trustworthiness considerations into the design, development, deployment, and use of AI systems.

How Businesses can Prepare for the EU AI Act: Including the Latest Discussions related to General Purpose AI

The European Parliament will vote to reach political agreement on the EU AIA on April 26, and the Parliament's latest version of the text is highly likely to include new provisions concerning General Purpose AI Systems (GPAIS), adding a framework of safeguards that places obligations on both GPAI providers and downstream developers. These obligations will most likely include testing and technical documentation requirements: GPAIS providers will be expected to test for safety, quality, and performance standards, and both providers and downstream developers will be expected to describe the model comprehensively through technical documentation (the model must be safe and understandable). This documentation could be akin to the format known as “AI model cards,” and may be expected to include information on performance, cybersecurity, risk, quality, and safety.

NYC Releases Final Rules for Automated Employment Decision Systems (Effective July 5, 2023)

Today, the New York City Department of Consumer and Worker Protection (DCWP) released its Notice of Adoption of the Final Rules for Local Law 144, requiring employers and employment agencies to provide a bias audit of automated employment decision tools (AEDTs). The enforcement date for these rules has been delayed to July 5, 2023 (previously April 15, 2023).

Growing Pressure to Regulate AI: Proposed State Bills Call for Impact Assessments and Transparency

In recent weeks, there has been a significant increase in the number of AI-related state bills introduced across the United States. This reflects growing pressure to address AI and automated decision-making systems used in the government and private sectors, and the potential risks they present. States have taken different approaches to fill the current gaps in regulation, including the creation of task forces and the allocation of funding for research. Additionally, a number of bills have proposed measures aimed at increasing transparency around AI systems, including requirements for algorithmic impact assessments and registries or inventories of AI systems in use. These transparency measures are growing in popularity as a regulatory tool to ensure that AI systems are trustworthy and safe, affecting developers and deployers of AI products in both the private and public sectors.

NYC Bias Audit Law: Clock ticking for Employers and HR Talent Technology Vendors

On January 1, 2023, New York City (NYC) Local Law 144, also known as the NYC bias audit law for automated employment decision tools, will go into effect. With only a few months left for organizations to become compliant, now is a good time to discuss the impact of this legislation and highlight areas for improvement as it matures.

The Need for Comprehensive Technology Governance at the Intersection of Tools & Rules

Effective technology governance requires tools that understand what the technology is doing. This is especially true in the case of Artificial Intelligence (AI), where tools which explain and interpret what the AI is doing become critical.

Future-Proofing Automated Employment Decision Tool Use to Comply with AI Regulations

Over the past decade, many companies have automated parts of the hiring process using what are now called Automated Employment Decision Tools (AEDTs). The use of Artificial Intelligence (AI) algorithms in these AEDTs has amplified concerns about bias.

Partner with Credo AI

Are you ready to help shape AI with humanity in mind? We're calling all Regulators, Academics, Policy Makers, Auditors, Standard Setters, and others to join us.

Become a Partner