AI Compliance

Understanding the White House Executive Order: Implications and Insights for Enterprises

November 16, 2023
Author(s)
Evi Fuelle
Navrina Singh
Contributor(s)
Lucía Gamboa
Ehrik Aldana
Ian Eisenberg

Intro

Just before Halloween, the global AI community received quite a treat - in many ways, the equivalent of a king-size candy bar: “The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

The extensive 111-page Executive Order (EO), published on October 30th, 2023, establishes guidelines to make AI use safe, secure, and trustworthy. It contains 13 sections covering areas like safety, equity, innovation, privacy, and international dimensions. Following closely on its heels, the U.S. Office of Management and Budget published its supplemental draft implementation guidance, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” on November 1st. 

This EO is the “bat-signal” the global AI ecosystem needed - it signals the U.S. government’s intention to lead in responsibly developing AI, with almost every U.S. government department and agency tasked to develop Responsible AI practices within a compressed period. The EO places a strong emphasis on capabilities and risk evaluations, recognizing their foundational role in AI governance, which will accelerate research throughout the AI community on evaluations and benchmarks. The effects will cascade through many sectors of the economy and shape AI governance worldwide.

Key transparency and evaluation implications (together with the Draft OMB Implementation Guidance) include:

  • Requirements for the development and implementation of rigorous testing, evaluations, and independent reviews to hold AI accountable and prevent discrimination or abuse;
  • Requirements for NIST to develop benchmarks for evaluating AI capabilities; 
  • Introduction of “AI tiering,” specifically using compute as a proxy for AI systems worthy of additional tracking and scrutiny;
  • Mandating minimum risk management practices for safety-impacting and rights-impacting AI in federal procurement and acquisition, including impact assessments, pre- and post-deployment testing, evaluation, and mitigation of emerging risks to rights and safety; and
  • Maintaining an inventory of powerful AI systems being developed, procured, and deployed.

Credo AI was present at the signing of the EO on October 30th, when President Biden stated, “The United States has an opportunity not only to lead with the power of its example but with an example of its power.” The Vice President and the Secretaries of nearly every federal agency were present at the signing, as well as Senate Majority Leader Chuck Schumer, representatives from the National Security Council (NSC) and the National Economic Council (NEC), White House OSTP Director Arati Prabhakar, Dr. Alondra Nelson, and many more key stakeholders. It was a concerted and impressive display of a government-wide commitment to focus on AI investment and to ensure safe and trustworthy AI development with real and implementable guardrails.

More agency action is expected in the months ahead, with federal agencies required to complete additional tasks within 270 days. By July 2024, many must develop guidance on using and procuring AI responsibly.


President Biden signing The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

What does it mean for enterprises?

The impact of this EO on the private sector will be far-reaching. It sets concrete standards for private entities that contract with the federal government, federal agencies themselves, and “developers of the most powerful AI systems” (i.e., generative AI and dual-use foundation models) - which will inevitably force change across private industry at large. The EO and OMB guidelines also create a standard the private sector can use to evaluate tools it would like to procure.

We are excited to see the emphasis on development standards for testing and evaluation of AI models broadly - not just for generative AI and “dual-use foundation models” - but for “new and existing AI that is developed, used, or procured by or on behalf of covered agencies.” [1] Section 4, Section 7, Section 8, and Section 10 of the Executive Order have particular relevance to creating a robust AI assurance ecosystem in sectors such as procurement/public services, financial services, insurance, and hiring.

We are also excited to see the EO’s foray into “tiering” powerful AI systems for scrutiny. To deploy governance requirements effectively while supporting continued innovation, a context-sensitive approach must be taken, focused on the AI systems with the greatest risk. Combining use-case information with proxies of model capabilities (as the EO does using compute) is a reasonable first step. However, we are concerned that the use-case callout (AI systems for biological sequences) is not sufficiently expansive to capture AI systems with the potential for extreme societal impact. We also note that compute is not a sufficient proxy for risky capabilities in the long term, and we believe that the EO’s tiering approach should quickly be complemented by more direct pre-deployment evaluations of capabilities, as well as demonstrated societal impact as shown by real-world usage.

Many additional details on how agencies will establish compliance plans and implement risk mitigation strategies will be fleshed out in forthcoming guidelines from specific agencies, including risk-management practices and methods for agencies to track and assess their ability to adopt AI and manage risk. Some specific impacts for already heavily regulated sectors include: 

Federal Procurement/Public Services [2]

Standards for federal procurement will mandate responsible and transparent AI practices. First and foremost, U.S. federal agencies will each have a dedicated Chief Artificial Intelligence Officer (CAIO) responsible for ensuring that AI procured by the federal government is safe, secure, and trustworthy. While many more details on government procurement will be fleshed out in forthcoming guidelines, specifically from the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP), the OMB Draft Guidance (published for public review on November 1st) lays out specific minimum risk management practices for uses of AI that impact the rights and safety of the public. These requirements fall into two broad categories (a rough sketch of how an agency team might track them follows the list):

  • Before using new or existing safety-impacting or rights-impacting AI, agencies must:
      ◦ Complete an AI impact assessment, which documents:
          ▪ the intended purpose of the AI and its expected benefit;
          ▪ the potential risks of using the AI; and
          ▪ the quality and appropriateness of the relevant data.
      ◦ Test the AI for performance in a real-world context.
      ◦ Independently evaluate the AI (through the CAIO, an agency oversight board, or another appropriate agency office with existing test and evaluation responsibilities).

  • While using new or existing safety-impacting or rights-impacting AI, agencies must:
      ◦ Conduct ongoing monitoring and establish thresholds for periodic human review;
      ◦ Mitigate emerging risks to rights and safety;
      ◦ Ensure adequate human training and assessment;
      ◦ Provide appropriate human consideration as part of decisions that pose a high risk to rights or safety; and
      ◦ Provide public notice and plain-language documents through the AI use case inventory.
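
For agency or vendor teams that want to operationalize the pre-use requirements above, here is a minimal sketch of how they might be tracked in code. This is purely illustrative - the class and field names are our own assumptions, and the OMB draft guidance prescribes no schema.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """The three elements the OMB draft guidance says an impact assessment documents."""
    intended_purpose: str        # intended purpose of the AI and its expected benefit
    potential_risks: list[str]   # potential risks of using the AI
    data_quality_notes: str      # quality and appropriateness of the relevant data

@dataclass
class PreDeploymentChecklist:
    """Hypothetical tracker for the pre-use requirements on safety- or
    rights-impacting AI; the impact assessment is required at construction."""
    impact_assessment: ImpactAssessment
    tested_in_real_world_context: bool = False
    independent_evaluator: str = ""  # e.g., the CAIO or an agency oversight board

    def ready_for_use(self) -> bool:
        # Testing and independent evaluation must both be complete.
        return self.tested_in_real_world_context and bool(self.independent_evaluator)
```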

Developers of Powerful Foundation Models [3]

Developers of foundation models, or organizations that procure large data centers that could be used to develop powerful foundation models, will face new reporting requirements, covering model training plans, cybersecurity measures for protecting model weights, outcomes of red-teaming exercises, and acquisition of large computing clusters. Special attention is placed on systems that may have advanced biological capabilities. The EO outlines a compute-based threshold for when a model is subject to these reporting requirements: 10²⁶ floating-point operations in general, and 10²³ for AI systems trained primarily on biological sequence data.
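
To make the thresholds concrete, here is a minimal sketch of the comparison the EO describes. The function is a hypothetical helper of our own - the Order defines numbers, not code:

```python
# Compute-based reporting thresholds named in the EO (Section 4.2).
GENERAL_THRESHOLD = 1e26  # floating-point operations, any model
BIO_THRESHOLD = 1e23      # models trained primarily on biological sequence data

def requires_reporting(training_flop: float, primarily_biological: bool) -> bool:
    """Hypothetical helper: does a training run cross the EO's reporting threshold?"""
    threshold = BIO_THRESHOLD if primarily_biological else GENERAL_THRESHOLD
    return training_flop >= threshold

# A 5e25-FLOP general-purpose run falls below the 1e26 bar...
print(requires_reporting(5e25, primarily_biological=False))  # False
# ...but the same compute spent on biological sequence data crosses the 1e23 bar.
print(requires_reporting(5e25, primarily_biological=True))   # True
```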

Financial Services [4]

Financial services will be impacted in numerous ways once regulations are proposed by the relevant agencies. Implications for financial services include providing banking information to Infrastructure-as-a-Service (IaaS) providers about foreign resellers and persons, additional scrutiny over uses of AI that may pose cybersecurity risks, preventing bias in underwriting models and in the use of credit information in tenant screening systems, and monitoring of third-party AI services in use.

Insurance [5]

As a regulated sector, insurance companies may face increased scrutiny of their underwriting processes with respect to housing insurance. Regulators are encouraged to evaluate models for bias and to consider rulemaking around third-party AI services currently in use.

Hiring [6]

Most of the impacts on hiring and discrimination from the use of AI will be determined by guidance that the U.S. Department of Labor is required to publish within the next year (by 30 October 2024).

So, what happens next?

This EO is full of concrete deadlines over the next 270 days - beginning with some tasks directed to be executed as soon as 45 days after the publication of the Executive Order (by 14 December 2023). 

Within 60 days (29 December 2023), the Director of OMB will convene and chair an interagency council to coordinate the development and use of AI in federal agencies’ programs and operations. To provide guidance on the Federal Government’s use of AI, within 150 days (28 March 2024) the Director of OMB, in coordination with the Director of OSTP, will issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government. This includes the requirement to designate a Chief Artificial Intelligence Officer at each federal agency, responsible for coordinating the agency’s use of AI, promoting AI innovation in the agency, and managing risks from the agency’s use of AI (carrying forward the responsibilities described in the previous Executive Order 13960 from December 2020).

Within 60 days of OMB and OSTP issuing the above guidance, the Director of OMB will develop a method for federal agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with federal AI policy. Furthermore, within 180 days of the above guidance, the Director of OMB shall develop an initial means to ensure that agency contracts for acquiring AI systems and services align with the guidance issued.
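
Since every deadline in the EO is a day count from its October 30, 2023 signing, the calendar dates above follow from simple date arithmetic - a quick illustrative sketch (the task labels are our shorthand for the milestones described in this post):

```python
from datetime import date, timedelta

EO_SIGNED = date(2023, 10, 30)  # date the Executive Order was signed

# Day counts named in the EO and the dates they land on
# (2024 is a leap year, so February contributes 29 days).
for days, task in [(45, "first agency tasks due"),
                   (60, "OMB interagency council convened"),
                   (150, "OMB guidance to agencies"),
                   (270, "bulk of agency deliverables")]:
    print(f"{days} days: {EO_SIGNED + timedelta(days=days)}  ({task})")
# Prints: 45 days -> 2023-12-14, 60 days -> 2023-12-29,
#         150 days -> 2024-03-28, 270 days -> 2024-07-26
```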

There are many additional efforts assigned to other agencies - such as the Department of Commerce standing up an AI safety institute and developing red-team safety standards, efforts to facilitate the development of watermarking and content provenance standards, and work for NIST, together with the Secretary of Energy and the Secretary of Homeland Security, to:

  • develop a companion resource to the AI Risk Management Framework for generative AI;
  • develop a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and dual-use foundation models;
  • launch an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities; and
  • develop and help ensure the availability of testing environments, such as testbeds, and support the design, development, and deployment of associated Privacy Enhancing Technologies (PETs).

Conclusion

CEO and Founder of Credo AI, Navrina Singh, and Global Policy Director at Credo AI, Evi Fuelle

There is a lot in this executive order. It is ambitious and comprehensive, but what it has in breadth it lacks in depth in some areas. That said, it calls for the development of standards for watermarking, standards for content provenance, and standards for red-teaming - all of which are needed to underpin certifications, auditing, and assessments of AI models.

As evidenced by the breadth of this EO, the White House Office of Science and Technology Policy conducted robust engagement with stakeholders across the spectrum of AI policymaking. There are references to safety, equity and civil rights, innovation and competition, privacy, consumer rights and protections, worker rights and labor unions, government use of AI, and an international dimension.

Many stakeholders were involved in the process of developing this EO. However, what will determine whether this EO can be effective, and whether regulatory capture can be avoided, is how different stakeholders are engaged in the work this EO assigns to each federal agency. Specifically, stakeholders that have been engaged in the research of auditing, benchmarking, evaluations, and assessments - beyond just foundation model and LLM developers - need to be involved in developing these standards.

Credo AI is excited to see the emphasis on the development of standards for testing and evaluation of AI models and is particularly glad to see NIST launch an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities. We look forward to continuing to partner with the U.S. government as they continue to implement the actions set forth in this Executive Order, and we look forward to the positive impact this will have on the Responsible AI community at large.

[1] Section 2(b), Proposed Memorandum for the Heads of Executive Departments and Agencies, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” https://www.whitehouse.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-draft-for-public-review.pdf
[2] EO 9(a), Protecting Privacy; EO 10.1(b), Providing Guidance for AI Management; OMB Draft Guidance 4(d), Managing Risks in Federal Procurement of Artificial Intelligence
[3] EO 4.2(a)(i), 4.2(a)(ii), 4.2(b)(i), 4.2(b)(ii), and 4.2(c)(iii), Ensuring Safe and Reliable AI
[4] EO 4.2(d)(i), Ensuring Safe and Reliable AI; 4.3(a)(ii), Managing AI in Critical Infrastructure and in Cybersecurity; 7.3(b) and 7.3(c), Strengthening AI and Civil Rights in the Broader Economy; 8(a), Protecting Consumers, Patients, Passengers, and Students
[5] EO 7.3(b), Strengthening AI and Civil Rights in the Broader Economy; 8(a), Protecting Consumers, Patients, Passengers, and Students
[6] EO 7.3, Strengthening AI and Civil Rights in the Broader Economy

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.