In this conversation with John Larson, EVP of AI at Booz Allen Hamilton, we explore one of the most pressing challenges in the AI landscape today: AI innovation is moving far faster than regulatory frameworks, governance structures, and institutions can keep up. Larson reflects on how dramatically AI capabilities have evolved in just a few years, and why that acceleration creates new governance pressures around autonomy, workflow integration, and the very real risks that emerge when AI systems begin making decisions on behalf of people and organizations. He explains that as enterprises shift from simple consumer-style AI tools to complex, multi-step agentic systems, they face difficult questions about how much autonomy to grant, where humans stay in the loop, and what safeguards are needed when AI behaves more like a digital employee operating inside the business.
Larson challenges the industry to rethink “trust” in AI, favoring the idea of “justified confidence” grounded in rigorous evaluation, workflow-specific performance requirements, and transparent data about model behavior. He also offers pragmatic guidance for leaders navigating this moment of uncertainty and pressure: start with a few meaningful workflows, build real competency through testing and iteration, and strengthen the technical and organizational foundations before scaling AI across the enterprise. This discussion is a valuable look at how one of the world’s foremost AI strategists thinks about governance, readiness, and what it will take for organizations to deploy AI responsibly at scale.
Inventory your AI use cases and operationalize contextual AI governance across your entire enterprise in one scalable platform.
Learn more about the AI Governance Academy and study with the Credo AI team—pioneers in AI governance and Responsible AI.