At Credo AI, we commend the White House for delivering a comprehensive AI Action Plan that recognizes a fundamental truth: America will win the AI race not just through raw computational power, but through trust. We are heartened to see our recommendations on standards development, AI evaluation ecosystems, open source innovation, AI value chain governance, increased trustworthy AI use in government, and the central focus on trust reflected throughout the final plan.
After a week on the ground in Washington, DC, engaging with critical partners and customers following the AI Action Plan release, it's clear that both administration leadership and American businesses understand that trust and governance remain imperative to "winning the AI race."
The Trust Deficit: The Hidden AI Challenge
The Action Plan's most striking insight isn't just about technology—it's about adoption:
"Today, the bottleneck to harnessing AI's full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI, particularly within large, established organizations. Many of America's most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards."
This assessment validates what Credo AI has long advocated: the nations that solve the trust problem first will dominate the AI era. According to the Edelman Trust Barometer, only 32% of US adults say they trust AI technology, compared with 50-77% in APAC countries. The AI Action Plan positions trust not as a nice-to-have feature, but as an essential requirement for America’s AI leadership.
Trust as National AI Strategy
By embedding trust requirements throughout the Action Plan—from secure-by-design mandates to AI evaluation ecosystems—the White House has made trust a core component of American AI competitiveness.
Nations and enterprises that can't trust their AI systems can't fully deploy them in high-stakes applications. By leading in trustworthy AI, America creates sustainable competitive moats that raw computational advantages alone cannot provide.
AI Evaluation Ecosystems: The plan directs DOD, DOE, CAISI, DHS, NSF, and academic partners to coordinate systematic AI testing for "transparency, effectiveness, use control, and security vulnerabilities." This represents an actionable commitment to robust AI evaluation.
Secure-by-Design Standards: Federal requirements that safety-critical and homeland security AI applications use "secure-by-design, robust and resilient AI systems" establish trust as a baseline requirement, not an optional enhancement.
Open-Source with Governance: The plan's support for open-source and open-weight models, coupled with governance requirements, demonstrates how innovation and oversight can accelerate together rather than compete.
Standards-Driven Innovation: The plan's emphasis on building international consensus for governance standards reflects our recommendation that standards should accelerate rather than hinder adoption. We're proud to see an approach that treats standards as enablers, not barriers.
Sector-Specific Governance: The plan's recognition that "a one-size-fits-all approach won't work for AI" reflects our recommendations for contextual, domain-specific governance frameworks. This nuanced approach enables rapid deployment while maintaining appropriate oversight.
Why Trust Creates Competitive Advantage and Accelerates AI Adoption
The Action Plan's strength lies in recognizing that a lack of trust becomes exponentially more constraining as AI systems scale. Organizations that can't trust their AI systems face fundamental barriers:
- High-stakes applications: Critical sectors like healthcare, finance, and national security require AI systems that can be trusted with consequential decisions
- Enterprise scaling: Organizations need confidence to deploy AI across their entire operation, not just pilot projects
- Partnership requirements: Collaboration with other organizations demands AI systems that all parties can trust
- Market access: Regulated industries increasingly require demonstrated trustworthiness as a prerequisite for AI deployment
Trustworthy AI systems can be deployed more broadly, scaled more aggressively, and integrated more deeply into critical operations.
The Federal Implementation Strategy
The Action Plan creates opportunities for trustworthy AI adoption across government:
Chief AI Officer Council: Formalizing this body as "the primary venue for interagency coordination" establishes governance leadership throughout federal agencies. CAIOs will become essential as the government’s risk management function, yet their ability to develop comprehensive agency playbooks may be constrained by budget and resource capacity.
AI Procurement Transformation: The Plan’s GSA-led Procurement Toolbox is where trust will be tested. Federal AI adoption won’t be able to scale long term if governance, including privacy, data security, and transparency, is not baked into federal contracts.
Evaluation Ecosystem: Federal investment in AI testbeds, evaluation frameworks, and measurement science creates the foundation for systematic trust-building across the AI ecosystem. Given the plan's promotion of open source, federal research initiatives should prioritize scientific research into open-source governance to protect national security.
Workforce Development: Emphasizing AI literacy and skills development ensures that Americans can work confidently alongside AI systems, further accelerating adoption. For federal workers, this upskilling will continue to be key. Building a consistent AI governance taxonomy will advance AI risk management across agency functions rather than create more opportunities for fragmentation.
International Leadership Through Trust
The Action Plan positions America to lead globally by establishing trustworthy AI as the gold standard:
Standards Export: By developing robust, practical governance frameworks, America can create attractive models for international adoption. Nations seeking AI development will naturally gravitate toward proven, trustworthy approaches.
Trust as a Layer in the American AI Tech Stack: American AI systems that can demonstrate trustworthiness have sustainable advantages in international markets, particularly among partners seeking reliable, transparent technology solutions. Governance embedded within AI systems development can differentiate American-made AI technologies from global competitors.
What Does the Action Plan Mean for Enterprise & Government Chief AI Officers (CAIOs) and How Can Credo AI Help?
While the AI Action Plan offers a strong blueprint for what it takes to succeed with AI, its real impact will be determined by how effectively it is implemented. The plan rightly emphasizes trust and raises key questions about building trustworthy AI systems at scale; the answer lies in robust governance frameworks that span the entire AI lifecycle, and Credo AI is here to enable that transformation.
At Credo AI, we address the trust challenges outlined in the Action Plan through our comprehensive AI governance platform. Our approach aligns directly with the White House's vision of trustworthy AI systems, offering integrated capabilities that address the specific requirements outlined in the Action Plan.
AI Discovery: Comprehensive visibility into AI usage across organizations is fundamental to trust-building. Credo AI provides complete inventory management for models, datasets, use cases, and agents, ensuring organizations know exactly what AI systems they're deploying and how they're being used. CAIOs will require this visibility, which is essential for the systematic evaluation the Action Plan mandates.
Open Source Governance: The Action Plan's encouragement of open-source and open-weight models requires unique governance. Credo AI provides critical capabilities for managing open-source and open-weight model adoption while enabling trust across the AI ecosystem. This includes tracking model lineage, understanding licensing implications, and ensuring that open models meet organizational security and performance standards.
Third-Party Procurement and AI Supply Chain Governance: CAIOs will need greater control over their AI supply chain, including third-party AI vendors. Credo AI helps organizations manage risks across their entire AI supply chain. This addresses the control and oversight challenges of modern AI development that the Action Plan explicitly recognizes as barriers to adoption.
Integrated AI and Data Stack: The AI Action Plan calls for CAIOs to enable rapid, secure AI adoption. The Credo AI governance platform serves as the single system of record for governance and trust across fragmented AI and data tools. This integration enables enterprises and federal agencies to adopt the latest AI capabilities while protecting critical operations from regulatory and operational risk.
Regulatory and Standards Compliance: The AI Action Plan recommends a revision of the AI RMF and new, CAISI-led standards development. For enterprises, Credo AI future-proofs AI investments against an uncertain and fragmented regulatory landscape. With the largest repository of codified regulations, identified AI risks and mitigations, and proven implementation experience, organizations can focus on running their business while we manage governance and build trust.
Workforce AI Enablement: Supporting the Action Plan's "worker-first AI agenda," our forward-deployed AI governance experts provide direct advisory support, helping organizations develop the AI literacy and governance capabilities their workforce needs to work confidently with AI systems.
The Path Forward
We're at a key inflection point where the foundational approaches to AI governance will shape AI innovation for decades. The Action Plan reinforces this: governance and trust are not peripheral; they are core to the AI tech stack that will define America's leadership.
Credo AI is committed to working with the administration and federal agencies to bridge existing gaps in trust, providing flexible, actionable policy frameworks that ensure secure, transparent, and accountable AI development. We look forward to continued engagement with the administration on this critical initiative, securing America’s enduring AI leadership through trusted AI.
The White House AI Action Plan reinforces trust as the foundation for American AI leadership. At Credo AI, we're committed to supporting this vision through governance solutions that enable confident AI adoption across government and enterprise. The future belongs to those who can deploy AI systems that people, organizations, and nations can truly trust.
For those looking to turn trust into their competitive advantage, contact us to start your AI Governance journey and accelerate your AI transformation.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.