The recent wave of AI adoption mandates in the United States is driving enterprises and organizations to adopt AI rapidly, while the federal posture of deregulation increases pressure on those same enterprises to make their own decisions about good business partners and trusted AI development practices.
While the federal government considers potential legislation to broadly curb state-level regulation of AI (for example, the text of the proposed “One Big Beautiful Bill Act” [H.R. 1] includes a “moratorium” on state-level AI legislation), U.S. states continue to pass and enforce their own AI laws. These legislative approaches range from mandated genAI disclosures and deepfake prohibitions to comprehensive AI risk management requirements.
Now more than ever, enterprises are seeking to better understand the emerging patchwork of U.S. state regulations, which have rapidly proliferated in the last few years.
To date:
- 10 states introduced cross-sector AI legislation in 2025, creating urgent compliance challenges for enterprises and organizations operating across state lines.
- 45 states (including Puerto Rico, the Virgin Islands, and Washington, D.C.) introduced AI bills in the 2024 legislative session, according to the National Conference of State Legislatures.
- 31 states (including Puerto Rico and the Virgin Islands) adopted resolutions or enacted legislation related to AI.
These proposals ranged in scope from narrowly defined policies to more comprehensive horizontal regulations.
Notably for private-sector use of AI, nine states (Connecticut, California, New York, Massachusetts, Nebraska, New Mexico, Illinois, Maryland, and Oklahoma) are advancing cross-sector bills, applicable to enterprises, that take a risk-based approach to preventing algorithmic discrimination and ensuring responsible use of high-risk AI systems (Virginia also introduced similar legislation, although it was vetoed in March 2025). This surge in state-level activity signals the growing national focus on establishing clear rules for AI deployment.
While these state laws share core principles of trusted AI governance, they differ in terminology, scope, and implementation requirements. For businesses operating across multiple states, this emerging regulatory patchwork demands a proactive and coordinated approach to comprehensive AI governance.
Let's examine the current landscape of state-level AI regulation to date, and how Credo AI helps organizations navigate these complex compliance demands.
AI Governance State Legislation
As states take initiative in the absence of comprehensive federal regulation, two distinct regulatory approaches have emerged: cross-sector frameworks and domain-specific protections.
Cross-Sector AI Regulation
Cross-sector bills establish broad governance frameworks that apply across multiple industries rather than targeting specific sectors. These "horizontal" regulations create consistent baseline requirements for high-risk AI systems regardless of application domain—similar to the approach taken by the EU AI Act. They typically include:
- Risk classification systems that categorize AI applications based on potential harm
- Mandatory impact assessments for high-risk systems
- Transparency and disclosure requirements
- Anti-discrimination protections
- Enforcement mechanisms and penalties
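To make this pattern concrete for technical teams, here is a minimal, hypothetical sketch of how a governance program might encode a risk-tier rule and an impact-assessment trigger in code. The tiers, field names, and threshold logic below (`RiskTier`, `AISystem`, `requires_impact_assessment`) are illustrative assumptions for this post, not definitions drawn from any particular statute.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers; statutes differ in how they define "high-risk."
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystem:
    name: str
    consequential_decisions: bool  # e.g., hiring, lending, housing decisions
    consumer_facing: bool

def classify(system: AISystem) -> RiskTier:
    """Toy rule: systems that influence consequential decisions are
    high-risk; consumer-facing systems are at least limited-risk."""
    if system.consequential_decisions:
        return RiskTier.HIGH
    if system.consumer_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

def requires_impact_assessment(system: AISystem) -> bool:
    # Cross-sector bills typically tie impact assessments to high-risk use.
    return classify(system) is RiskTier.HIGH

resume_screener = AISystem("resume-screener", consequential_decisions=True, consumer_facing=False)
assert requires_impact_assessment(resume_screener)
```

However an organization implements it, the value of an explicit rule like this is that classification decisions become consistent and auditable rather than ad hoc.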
The following states have introduced comprehensive cross-sector bills focused on consumer protection and high-risk AI systems:
- Connecticut: An Act Concerning Artificial Intelligence (SB 2) - Requires impact assessments and transparency measures for high-risk AI systems.
- California: California AI Transparency Act (SB 420) - Mandates disclosure requirements and regular auditing for AI systems affecting consumers.
- New York: AI Consumer Protection Act (A 768/S 1962) - Establishes consumer rights, including explanation of AI-driven decisions and opt-out provisions.
- Massachusetts: An Act Protecting Consumers in Interactions with Artificial Intelligence Systems (HD4053) - Focuses on transparency in consumer-facing AI and prohibits deceptive practices.
- Nebraska: Artificial Intelligence Consumer Protection Act - Creates guardrails for consumer data usage in AI training and deployment.
- New Mexico: Artificial Intelligence Act (HB 60) - Requires impact assessments and establishes an AI advisory committee.
- Illinois: Preventing Algorithmic Discrimination Act (SB 2203) - Specifically targets bias detection and mitigation in algorithmic systems.
Domain-Specific AI Regulation
In addition to cross-sector, risk-based bills, states are advancing AI regulation focused on domain-specific protections, including:
Rights & Protections
- New York
- AI Bill of Rights (NY A3265) - Establishes comprehensive resident protections for all AI-driven decisions.
- Healthcare AI Oversight (NY A3991) - Implements strict protocols for AI use in medical resource allocation and utilization management.
- Employment Decision Tools (NY A3779/S185) - Requires bias audits and prohibits certain uses of automated employment screening tools.
Advanced AI & Data Protection
- California
- Artificial Intelligence: Frontier Models (SB 53) - Creates first-of-its-kind safeguards for large-scale foundation models with potential societal impacts.
- High-Risk AI Systems: Duty to Protect Personal Information (SB 468) - Establishes explicit liability framework for data security failures in high-risk AI.
Workplace & Employment
- New Jersey: Independent Bias Auditing for Automated Employment Decision Tools (SB 2964/AB 3855) - Mandates third-party validation of hiring algorithms.
- Massachusetts: An Act Fostering Artificial Intelligence Responsibility (SD 838) - Sets boundaries for workplace monitoring through AI systems.
Government Use
- Texas: Responsible AI Governance Act (HB 149) - Establishes standards specifically for AI procurement and deployment by state agencies.
- Kentucky: An Act relating to protection of information and declaring an emergency (SB 4) - Enacted on March 31, 2025; requires the Commonwealth Office of Technology to implement standards for responsible AI use across Kentucky state government.
The Credo AI Approach
Given this regulatory landscape, organizations need tools that make compliance scalable and sustainable. Credo AI addresses this complexity with an updated Control Library that identifies commonalities across state regulations. This enables enterprises to focus on shared requirements and build a strong AI governance foundation that is both comprehensive and adaptable.
Flexible Controls Built for Enterprise Use
Credo AI’s consolidated Control Library (within the Credo AI Software Platform) is designed with adaptability in mind. Each control strikes a balance between specificity and flexibility, ensuring:
- General applicability across sectors
- Multi-purpose coverage for overlapping regulations
- Clearly defined objectives and outcomes
- Practical implementation pathways
Policy Packs Tailored to State Laws
To help organizations respond quickly to enacted AI regulation, Credo AI offers out-of-the-box Policy Packs that translate regulatory requirements into clear, actionable governance practices.
Policy requirements within Policy Packs are mapped to our Control Library, making it easier for teams to understand, operationalize, and provide evidence to demonstrate compliance. For example, we offer dedicated Policy Packs for state-level laws, including:
- Colorado AI Act - Split into two Policy Packs, one for developers and one for deployers. The law’s requirements are mapped to:
- 8 deployer policy requirements, mapped to 10 Credo AI controls
- 4 developer policy requirements, mapped to 5 Credo AI controls

Explore our visualization to see how these interact.
- Utah’s Artificial Intelligence Policy Act - This Policy Pack has a single policy requirement covering the law’s generative AI disclosure obligations. It is associated with a Credo AI control that also satisfies requirements for Colorado’s developers and deployers, as well as EU deployers.
This approach means that as new regulations come into effect, we are able to layer on new requirements, reducing the burden of duplicative documentation and enabling the reuse of evidence across multiple frameworks.
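As a rough illustration of this reuse pattern (a hypothetical sketch, not Credo AI’s actual data model or API), consider how a single shared control can satisfy policy requirements from multiple frameworks at once. The identifiers below (`CTRL-DISCLOSE-01`, the requirement IDs) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Control:
    control_id: str
    description: str

@dataclass
class PolicyRequirement:
    framework: str      # e.g., "Colorado AI Act (deployer)"
    requirement_id: str
    controls: list[Control] = field(default_factory=list)

# One shared control, referenced by requirements from two different laws.
GENAI_DISCLOSURE = Control("CTRL-DISCLOSE-01", "Disclose generative AI use to consumers")

requirements = [
    PolicyRequirement("Colorado AI Act (deployer)", "CO-DEP-3", [GENAI_DISCLOSURE]),
    PolicyRequirement("Utah AI Policy Act", "UT-1", [GENAI_DISCLOSURE]),
]

def frameworks_satisfied_by(control: Control) -> list[str]:
    """Every framework whose requirements map to this control."""
    return [r.framework for r in requirements for c in r.controls if c == control]

print(frameworks_satisfied_by(GENAI_DISCLOSURE))
# ['Colorado AI Act (deployer)', 'Utah AI Policy Act']
```

In a shape like this, evidence attached once to the shared control can be surfaced wherever that control appears, which is exactly the deduplication benefit described above.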
Credo AI Meets the Moment
As AI regulation accelerates, organizations need agile governance strategies that ensure compliance while enabling innovation. Credo AI empowers enterprises to:
- Stay current with evolving laws
- Map complex policy requirements to a streamlined set of controls
- Implement best-in-class risk mitigation aligned to policy
- Provide technical evidence that meets both risk and compliance requirements
- Reuse evidence and impact assessments across jurisdictions
By rapidly translating new regulations into Policy Packs and linking them to our Control Library, we help organizations scale compliance using their existing governance artifacts—no need to start from scratch.
Ready to Future-Proof Your AI Governance?
Connect with Credo AI today to learn how our AI Governance Platform and Advisory Services can help you meet emerging policy requirements and build trusted, resilient AI systems.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.