AI agents offer enterprises a significant opportunity to enhance productivity by executing multi-step work across systems, not just generating content. However, this shift also expands the governance surface area from managing model behavior to managing autonomous actions, permissions, and multi-agent interactions in real operational environments. Because many agent deployments are still emerging, organizations must align on what “agentic” means, how it changes risk, and what new controls are required before scaling. This article outlines seven governance considerations that teams should address to reduce operational, security, and compliance risk while capturing value.
—
The anticipated proliferation of AI agents promises significant value to organizations that adopt this new and transformative technology. It is easy to see how impactful agentic AI can be in a world of organizational complexity, given its potential to improve and automate organizational processes and workflows. This is especially true for areas with predictable, routine processes, which are the most ripe for transformation by AI agents.
On the other hand, for all the potential that agentic AI holds, it also introduces substantial risk and uncertainty for organizations, many of which have only just gotten a firm grasp of AI governance for machine learning and generative AI. Best practices for governing AI agents are only now beginning to emerge, and many early adopters report the all-too-familiar feeling that they are ‘building the ship as they sail it’.
The purpose of this article is to outline key governance considerations that should be addressed before you set sail on your agentic AI journey. We believe the initial priority for organizations adopting agentic AI should be gaining alignment on the definition of agentic AI, understanding how it changes the status quo, and acknowledging the novel risk and governance considerations it introduces. In a related article, we will provide a more tactical perspective on how to navigate agentic AI challenges to minimize risk and maximize value.
How Agentic AI Changes the Status Quo
The evolution from generative AI to AI agents marks a major inflection point for enterprises. Until now, generative AI has been the most advanced AI technology that enterprises have needed to govern. GenAI deployments are typically interactive and prompt-driven, and commonly produce recommendations or content. Agentic systems extend this by planning and executing actions via tools and workflows.
Although they leverage GenAI and other models, agents are fundamentally different from generative AI in that they are dynamic, autonomous, and goal-oriented by nature. Agents are designed to accomplish a stated goal by performing a series of actions in organizational environments. In this way, agents can perform tasks that were previously human-operated, executing steps across systems under defined permissions. Rather than only generating content, agents may take multi-step actions to reach a defined outcome (e.g., open a ticket, update a record, trigger a workflow), subject to controls. The implications of this shift are significant: very soon, AI will evolve from a passive business-insights tool with a relatively confined footprint to an autonomous entity that takes direct actions in real-world environments.
To summarize, AI Agents have the following characteristics:
- Agents are the next evolutionary level of AI, after generative AI
- Agents typically combine a foundation model (LLM or multimodal model) with orchestration software (tools, memory/state, policies, workflow logic, evaluators)
- Agents take actions in virtual and real-world environments
- Agents have the capacity to autonomously pursue complex, directional goals
- Agents can engage in long-term and adaptive planning
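The pattern described above can be sketched in a few lines: a model proposes the next action, an orchestrator enforces a policy and executes tools, and memory carries state across steps. This is a minimal illustration only; the function names, tools, and allow-list here are hypothetical, and real agent frameworks add planning, evaluation, and error handling on top of this loop.

```python
def fake_model(goal, memory):
    """Stand-in for an LLM: proposes the next tool call toward the goal."""
    if "ticket_id" not in memory:
        return ("open_ticket", {"summary": goal})
    return ("finish", {})

# Illustrative tool registry: each tool is a callable the orchestrator can invoke.
TOOLS = {
    "open_ticket": lambda args: {"ticket_id": "T-1", "summary": args["summary"]},
}

# Policy: agents may only call explicitly allow-listed tools.
ALLOWED_TOOLS = {"open_ticket"}

def run_agent(goal, max_steps=5):
    """Orchestration loop: model proposes, policy checks, tools execute, memory accumulates."""
    memory = {}
    for _ in range(max_steps):
        tool, args = fake_model(goal, memory)
        if tool == "finish":
            return memory
        if tool not in ALLOWED_TOOLS:
            raise PermissionError(f"tool {tool!r} not permitted")
        memory.update(TOOLS[tool](args))
    return memory
```

The key governance-relevant detail is that the orchestrator, not the model, decides whether a proposed action actually executes, which is where permissions and policies attach.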
Novel Governance Considerations for Agentic AI
New technologies such as Agentic AI inevitably bring new risk considerations for organizations to contend with. As organizations adopt Agentic AI, they will need to address the novel risks introduced by these autonomous, goal-oriented AI systems.
While in the early stages of adopting Agentic AI, organizations should keep in mind the following seven considerations for governing agents:
- AI agents should initially be considered ‘high risk’ due to their autonomy and ability to take actions in real-world environments. Since agents can essentially function as proxies for employees, the impact they can have on organizations is much greater than what we’ve seen from generative AI and machine learning. This can be a productivity and innovation boon for organizations that get agentic AI right. On the other hand, a compromised AI agent can wreak havoc orders of magnitude greater than a static GenAI or machine learning model, since it can directly take harmful actions with immediate reputational, financial, or operational repercussions. For example, an AI agent used for customer service can erroneously approve or reject requested refunds or purchases. At least during their initial adoption of agentic AI, organizations would be prudent to proactively identify and hedge against worst-case scenarios by ensuring agentic AI projects follow governance requirements for high-risk use cases.
- Agentic AI adoption will be widespread and ubiquitous. Like generative AI, we anticipate AI agents will eventually become ubiquitously accessible and used by all employees. Over the past couple of years, the move from traditional machine learning to generative AI brought growing pains for many organizations. Instead of AI being restricted to a small number of highly skilled and sophisticated IT personnel, it was suddenly made available to all employees, resulting in oversight challenges that enterprises were forced to contend with. This trend (and the associated pain points) will likely persist with agentic AI, where the technology is close to becoming ‘no-code’, enabling non-technical staff to develop and deploy agents. From a governance standpoint, we foresee a future where ‘personal’ agents used by individuals will be subject to acceptable use policies, whereas ‘enterprise’ agents used for critical organizational processes will be tracked and monitored in an enterprise AI inventory.
- ‘Good’ data and information will no longer be a nice-to-have, but rather a table-stakes requirement. Agents will need extensive access to an enterprise’s data to effectively perform multiple tasks and accomplish stated goals. If there are data integrity, quality, or alignment issues across systems, there is a risk that the agent will make errors based on the erroneous data or information it references. Left unchecked, this can have the compounding effect of propagating bad data downstream to other systems, databases, or processes, further exacerbating the issue. This will put pressure on organizations to ensure their data and information governance and hygiene practices are adequate to enable, rather than derail, agentic AI projects.
- Value measurement can draw on operational excellence to evaluate the success of agentic AI projects. To date, measuring the value of AI has been an unrealized aspiration for many organizations. For instance, recent reports show that enterprise value capture from generative AI has been relatively dismal to date (see Forbes and MIT studies). Agents will introduce another obstacle for organizations seeking to measure the value realized by AI projects. However, many organizations may have an unexpected leg up: since agents closely approximate employees, organizations can leverage the methods they already use to evaluate operational performance to evaluate AI agent performance.
- Agents’ autonomy and broad access make them particularly susceptible to cybersecurity attacks. Agents’ broad access to data makes them especially vulnerable to attacks like prompt injection, where hidden instructions are fed to an agent so that it performs adversarial actions internally or with external stakeholders. Additionally, a compromised agent can act as a ‘master key’ that unlocks an enormous amount of sensitive data to cybercriminals, resulting in a harmful data exfiltration event. Given this backdrop, the level and types of access provisioned to agents must be examined very carefully and aligned to the organization’s security risk appetite.
- Agentic AI elevates the risk of a cascade event, in which downstream processes are adversely impacted. Especially in scenarios with multiple interoperable agents, an issue with one agent is likely to cascade across tasks to the other agents it works with, compounding and magnifying the impact of a risk event. This can cause an otherwise immaterial error to have a major impact on internal or external stakeholders, along with heightened financial, regulatory, or security risks, depending on the circumstances. Organizations will need to establish mechanisms for safe agent-to-agent interactions to avoid cascade events that would otherwise cause significant damage and potentially go unnoticed.
- Agentic AI changes how the enterprise conducts AI governance. The ubiquity and democratization of agents means individual employees will bear greater responsibility than ever for contributing to responsible AI. Central AI governance teams will need to rethink their approach and establish a more balanced shared-responsibility model with agent owners. Agentic AI will also bring unique risks, such as agentic sprawl and multi-agent coordination failure. As a consequence, the risk and control structures in an enterprise AI governance program will need to evolve to meet the agentic world.
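The least-privilege access point raised in the list above can be made concrete with a simple deny-by-default scope check: each agent receives an explicit set of granted scopes, and every action is validated against that set before execution. The agent identifiers and scope strings below are purely illustrative, not a reference to any particular product or API.

```python
# Hypothetical scope registry: each enterprise agent is provisioned only the
# access its role requires, aligned to the organization's risk appetite.
AGENT_SCOPES = {
    "refund-agent": {"orders:read", "refunds:write"},
    "faq-agent": {"kb:read"},
}

def authorize(agent_id, required_scope):
    """Deny by default: an agent may only act within its granted scopes."""
    granted = AGENT_SCOPES.get(agent_id, set())
    if required_scope not in granted:
        raise PermissionError(f"{agent_id} lacks scope {required_scope!r}")
    return True
```

Under this sketch, a compromised ‘faq-agent’ cannot be steered into issuing refunds, because the orchestration layer rejects any call outside its provisioned scope, which limits the ‘master key’ and cascade risks described above.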
Conclusion
Agentic AI holds significant value for organizations that get it right. Now more than ever, organizations will need to go in with ‘eyes wide open’, giving equal attention both to its potential for unlocking value and to the pitfalls that can introduce additional risk. Understanding the considerations outlined above is a paramount first step toward augmenting your AI governance capabilities to meet the challenge of the agentic AI era.
To learn more about AI governance in the age of agents, stay tuned to Credo AI's blog series on Agents, where we will continue to provide perspectives and emerging best practices.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.