AI Governance 101

OMB Guidance: How Federal Government Agencies Can Adopt AI Responsibly

May 9, 2024
Author(s)
Lucía Gamboa
Evi Fuelle

As the adoption of AI accelerates across the U.S. Federal Government, ensuring compliance, transparency, and accountability has become paramount. Credo AI is here to help agencies meet the rigorous requirements of the U.S. Office of Management and Budget’s (OMB) Memorandum on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence”.

In partnership with Booz Allen Hamilton, Credo AI announced its OMB offering at the Special Competitive Studies Project (SCSP) AI Expo on May 9, 2024. Our solution will empower federal agencies to inventory their AI Systems, manage AI risk, provide human oversight, and help ensure compliance.


1. What is the OMB Guidance? 

OMB’s AI Memorandum M-24-10 (hereafter, the OMB guidance) is a key deliverable of President Biden’s Executive Order 14110 and directs U.S. Federal Government agencies to manage risks from the use of AI. The guidance is largely focused on managing risk and mitigating harms from AI that could impact human rights and safety. All new and existing AI that is developed, used, or procured by the 24 U.S. Federal Government agencies covered by the Chief Financial Officers Act must meet specific governance requirements when it is used for safety-impacting or rights-impacting purposes.

As outlined in Section 5(b) and Appendix I, AI is presumed to be safety-impacting or rights-impacting if it is used, or expected to be used, in real-world conditions to control or significantly influence the outcomes of the agency activities and decisions enumerated in the memorandum.


2. What do Federal Government Agencies need to know? 

By December 1, 2024, U.S. Federal Government agencies must implement the minimum practices listed in Section 5(c) for safety-impacting and rights-impacting AI, or stop using any AI that is not compliant with those minimum practices. These minimum practices represent an initial baseline for managing risk from the use of AI.

U.S. Federal Government agencies need to ensure that the minimum practices are met for safety-impacting or rights-impacting AI, irrespective of whether the AI is developed in-house or procured. As outlined in Section 5(d), agencies will need to obtain adequate documentation (such as model, data, and system cards) to assess a procured AI system’s capabilities and limitations and to understand the data it relies on.
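
To make the documentation requirement in Section 5(d) more concrete, here is a minimal sketch, in Python, of how an agency team might track whether a procured AI system’s documentation package is complete before deployment. The artifact names and the ProcurementDocumentation structure are assumptions made for this illustration; they are not a schema defined by the OMB guidance or by the Credo AI platform.

    # Illustrative only: the artifact names and structure below are assumptions,
    # not requirements spelled out in OMB M-24-10.
    from dataclasses import dataclass


    @dataclass
    class ProcurementDocumentation:
        """Documentation artifacts collected for a procured AI system."""
        model_card: bool = False   # describes capabilities and known limitations
        data_card: bool = False    # describes the provenance of training and evaluation data
        system_card: bool = False  # describes how the AI is integrated and used

        def missing_artifacts(self) -> list[str]:
            """Return the documentation artifacts that have not yet been obtained."""
            required = {
                "model_card": self.model_card,
                "data_card": self.data_card,
                "system_card": self.system_card,
            }
            return [name for name, present in required.items() if not present]


    # Example: a vendor has supplied a model card and a data card, but no system card.
    docs = ProcurementDocumentation(model_card=True, data_card=True)
    gaps = docs.missing_artifacts()
    if gaps:
        print(f"Documentation gaps to resolve before deployment: {gaps}")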

3. How can Credo AI help government agencies meet the OMB requirements? 

The Credo AI team at the SCSP AI Expo, showcasing how our platform helps organizations adopt AI governance, risk, and compliance workflows.

Our solution will help federal agencies to:

  • Inventory AI Systems: Maintain an inventory of individual AI use cases and implement intelligent workflow automation (see the sketch after this list).
  • Manage AI Risk: Access an AI-specific risk and control library, along with vendor risk management capabilities.
  • Ensure Compliance: Monitor adherence to the OMB Guidance on “Advancing Governance, Innovation, and Risk Management for Agency Use of AI.”
  • Provide Human Oversight: Conduct ongoing human oversight per use case to identify and mitigate risks.
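
As a rough illustration of the inventory and compliance workflow described in the list above, the Python sketch below models individual AI use cases and flags covered AI that has not met the minimum practices by the December 1, 2024 deadline. The schema, the example use cases, and the must_stop_use rule are assumptions made for this illustration; they do not represent the Credo AI platform’s data model or the exact logic of the memorandum.

    # A minimal sketch of an agency AI use-case inventory entry. The fields and the
    # compliance rule are illustrative assumptions, not the OMB guidance itself.
    from dataclasses import dataclass
    from datetime import date

    MINIMUM_PRACTICES_DEADLINE = date(2024, 12, 1)  # deadline stated in the OMB guidance


    @dataclass
    class AIUseCase:
        name: str
        rights_impacting: bool
        safety_impacting: bool
        minimum_practices_met: bool

        def must_stop_use(self, today: date) -> bool:
            """Flag covered AI that has not met the minimum practices by the deadline."""
            covered = self.rights_impacting or self.safety_impacting
            return covered and not self.minimum_practices_met and today >= MINIMUM_PRACTICES_DEADLINE


    # Hypothetical inventory entries for illustration only.
    inventory = [
        AIUseCase("benefits eligibility screening", rights_impacting=True,
                  safety_impacting=False, minimum_practices_met=False),
        AIUseCase("internal document search", rights_impacting=False,
                  safety_impacting=False, minimum_practices_met=False),
    ]
    for use_case in inventory:
        if use_case.must_stop_use(date.today()):
            print(f"Action needed: {use_case.name} has not met the minimum practices")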

Credo AI is committed to empowering responsible, impactful AI innovation for the U.S. Federal Government. The deadlines are fast approaching, and we stand ready to help agencies get prepared.

Request a demo to explore our tailored AI governance solutions for the public sector!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.