
Growing Pressure to Regulate AI: Proposed State Bills Call for Impact Assessments and Transparency

Ehrik Aldana
Tech Policy Product Manager
March 23, 2023

In recent weeks, there has been a significant increase in the number of AI-related state bills introduced across the United States.

This surge reflects growing pressure to address AI and automated decision-making systems used in government and the private sector, and the potential risks they present.

States have taken different approaches to fill the current gaps in regulation, including the development of task forces and the allocation of funding for research.

Additionally, a number of bills have proposed measures aimed at increasing transparency around AI systems, including requirements for algorithmic impact assessments and registries/inventories of AI systems used.

These transparency measures are growing in popularity as a regulatory tool to ensure that AI systems are trustworthy and safe, and they affect developers and deployers of AI products in both the private and public sectors.

AI Transparency Bills at a Glance


| Legislation | State | Summary |
|---|---|---|
| AB 331: Automated Decision Tools (proposed) | California | Requires developers and deployers of automated decision tools to provide impact assessments. Prohibits automated decision tools that contribute to algorithmic discrimination. |
| SB 1103: AI, Automated Decision-Making, and Personal Privacy (proposed) | Connecticut | Mandates the inventory and testing of state-used algorithms. |
| A03308: Digital Fairness Act (proposed) | New York | Requires governmental agencies and nonprofits that wish to use, procure, or access information from an automated decision system to engage a neutral third party to conduct a publicly published impact assessment. |
| HB 49: Artificial Intelligence Registry (proposed) | Pennsylvania | Creates a state registry of “businesses operating artificial intelligence systems.” |
| H 410: Use and Oversight of AI in State Government (enacted) | Vermont | Requires the state to conduct an inventory of all automated decision systems being developed, used, or procured by the state. |
| HB 2060: AI Advisory Council and Automated Decision Systems Inventory (proposed) | Texas | Requires state agencies using automated decision-making systems to submit an inventory report of all systems being developed, used, or procured. |
| SB 5356: Government Procurement and Use of Automated Decision Systems (proposed) | Washington | Requires state agencies to conduct impact assessments on automated decision systems they procure, with ongoing monitoring or auditing. Creates a public inventory of all algorithmic accountability reports on automated decision systems proposed for, or being used, developed, or procured by, public agencies. |

Impact Assessments

Much as the European Union’s General Data Protection Regulation (GDPR) mandates data protection impact assessments (DPIAs) to address the risks of data collection and processing, regulators are proposing algorithmic or AI impact assessments to mitigate potential bias, discrimination, and other adverse consequences of AI and algorithmic systems.

For example, California AB 331 requires developers and deployers of an automated decision tool to complete and document an impact assessment that includes, at a minimum, the following elements:

1. A statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts.

2. A description of the automated decision tool’s outputs and how they are used to make, or be a controlling factor in making, a consequential decision.

3. A summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision.

4. An analysis of potential adverse impacts on the basis of sex, race, or ethnicity that arise from the use of the automated decision tool.

5. A description of the measures taken by the developer to mitigate the known risks of algorithmic discrimination arising from the use of the automated decision tool.

6. A description of how the automated decision tool can be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision.

These impact assessments must be provided to the California Civil Rights Department, with fines of up to $10,000 for failing to produce the document.
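For teams preparing to track these obligations internally, the sketch below shows one minimal way to represent such an assessment as structured data in Python. Everything here, from the class name to the example values, is our own hypothetical illustration; AB 331 prescribes what an assessment must contain, not how it is stored.

```python
from dataclasses import dataclass


@dataclass
class AB331ImpactAssessment:
    """Hypothetical record covering the six minimum elements of CA AB 331.

    The class and field names are our own sketch; the bill prescribes
    the content of an impact assessment, not its format.
    """
    purpose_and_intended_use: str          # 1. purpose, intended benefits, uses, deployment contexts
    outputs_and_decision_role: str         # 2. outputs and how they factor into consequential decisions
    personal_data_collected: list[str]     # 3. types of data collected from natural persons
    adverse_impact_analysis: str           # 4. analysis of potential adverse impact by sex, race, or ethnicity
    discrimination_mitigations: list[str]  # 5. measures taken to mitigate known discrimination risks
    human_oversight_description: str       # 6. how a natural person can use or monitor the tool


# Example entry for a hypothetical resume-screening tool.
assessment = AB331ImpactAssessment(
    purpose_and_intended_use="Rank applicants to speed up recruiter review",
    outputs_and_decision_role="Fit score (0-100); advisory input, not a controlling factor",
    personal_data_collected=["resume text", "employment history"],
    adverse_impact_analysis="Selection-rate comparison across sex and race/ethnicity groups",
    discrimination_mitigations=["pre-deployment bias testing", "quarterly disparity monitoring"],
    human_oversight_description="A recruiter reviews every recommendation before any decision",
)
```

One record per tool, completed before deployment and updated as the tool changes, is the simplest way to keep the document ready for a regulator's request.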

Additionally, New York’s proposed Digital Fairness Act would require any state agency or nonprofit entity using an automated decision system to conduct and publicly publish an impact assessment that includes:

  1. A detailed description of the automated decision system, its design, its training, its data, and its purpose;
  2. An assessment of the relative benefits and costs of the automated decision system in light of its purpose, taking into account relevant factors, including data minimization practices, the duration for which personal information and the results of the automated decision system are stored, what information about the automated decision system is available to the public, and the recipients of the results of the automated decision system;
  3. An assessment of the risk of harm posed by the automated decision system and the risk that the system may result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting individuals; and
  4. The measures the state agency will employ to minimize those risks, including technological and physical safeguards.

As AI technologies continue to develop and become more integrated into various sectors, it is expected that more governments and regulatory bodies will introduce requirements for algorithmic impact assessments or similar transparency reports.

AI Inventories

Following in the footsteps of the White House’s Executive Order (EO) 13960, states such as Connecticut, Pennsylvania, Texas, and Washington have proposed that state agencies inventory their AI use cases, including those procured from private companies.

Vermont already passed such a law last year, requiring an inventory of automated decision systems used or procured by the state. For each automated decision system, the inventory must include, among other things:

  1. The automated decision system’s name and vendor;
  2. A description of the automated decision system’s general capabilities, including: (A) reasonably foreseeable capabilities outside the scope of the agency’s proposed use; and (B) whether the automated decision system is used or may be used for independent decision-making powers and the impact of those decisions on Vermont residents;
  3. The type or types of data inputs that the technology uses; how that data is generated, collected, and processed; and the type or types of data the automated decision system is reasonably likely to generate;
  4. Whether the automated decision system has been tested for bias by an independent third party, has a known bias, or is untested for bias; and
  5. A description of the purpose and proposed use of the automated decision system, including: (A) what decision or decisions it will be used to make or support; (B) whether it is an automated final decision system or automated support decision system; and (C) its intended benefits, including any data or research relevant to those outcomes.
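For illustration, here is a comparable Python sketch of a single inventory entry covering the elements above. As with the assessment sketch earlier, the names and types are our own assumptions rather than anything the act specifies; Vermont’s law dictates what must be reported, not the data model.

```python
from dataclasses import dataclass
from enum import Enum


class BiasTesting(Enum):
    """The Vermont law distinguishes three bias-testing states."""
    INDEPENDENTLY_TESTED = "tested for bias by an independent third party"
    KNOWN_BIAS = "has a known bias"
    UNTESTED = "untested for bias"


@dataclass
class ADSInventoryEntry:
    """Hypothetical inventory record mirroring H 410's required elements."""
    name: str                            # 1. system name...
    vendor: str                          #    ...and vendor
    general_capabilities: str            # 2. capabilities, incl. foreseeable out-of-scope ones
    independent_decision_making: bool    #    used (or usable) for independent decision-making?
    data_inputs: list[str]               # 3. input data types and how generated/collected/processed
    data_likely_generated: list[str]     #    data the system is reasonably likely to generate
    bias_testing: BiasTesting            # 4. bias-testing status
    purpose_and_proposed_use: str        # 5. decisions the system will make or support
    is_final_decision_system: bool       #    automated final vs. automated support decision system
    intended_benefits: str               #    intended benefits and any supporting data or research
```

Keeping one entry like this per system also positions an organization to answer registry-style requests, such as Pennsylvania’s proposed business registry, from the same inventory.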

While the Vermont law is focused only on automated decision systems used by the state, Pennsylvania has proposed HB 49, which would create a similar registry of “businesses operating artificial intelligence systems” in the state detailing basic information about the business and “the intent of the software being utilized.”

With the emergence of these registries, it becomes all the more important for businesses to maintain an accurate and comprehensive inventory of their AI and automated decision-making systems.

Are Your AI Systems Compliant?

Emerging transparency requirements will affect how governments and businesses develop and deploy AI and automated decision-making systems. Preparing for them now is key to using AI responsibly and ensuring compliance. Learn more about how Credo AI can support your organization by scheduling a demo today at demo@credo.ai.
