AI Governance

What Is AI Governance and Why Should You Care?

There is a lack of consensus around what AI Governance actually entails. We’d like to cut through the noise and provide a definition of AI Governance rooted in Credo AI’s experience working with organizations across different industries and sectors, collaborating with policymakers and standard-setting bodies worldwide, and supporting various global 2000 customers to deliver responsible AI at scale.

November 10, 2022
Author(s)
Susannah Shattuck
Contributor(s)
Ian Eisenberg
Catharina Doria

Part #1: What is AI Governance? This is the first blog post in our AI Governance series, where we'll share our ideas on how to ensure AI systems are built, deployed, used, and managed in service of humanity.

Retrieved from Stable Diffusion (AI Generative Model) with the prompt “a robot winning Jeopardy!”

The last decade has been the decade of AI.

From IBM Watson’s victory over Ken Jennings on Jeopardy! in 2011 to generative AI winning art contests and convincing people they are sentient, it’s been a wild ride. Underneath those headline-grabbing moments has been another development—AI systems are now becoming the technological infrastructure of the future. From the phones in our pockets to the manufacturing systems that made them, from product recommendations to product design, AI systems are now embedded in almost every part of our world. And with the increasing pervasiveness of AI, there has also been a growing awareness of the unique risks that these systems pose—not only to the businesses that build and use them but, more existentially, to human values and flourishing.

And so, a parallel movement was born: the Responsible AI movement. Initially led by a small group of technologists who saw firsthand that AI systems could fail in unexpected and harmful ways, this movement—striving to align AI with human values—has now grown to become a major force in the policy and business ecosystems. “Responsible AI” and “AI governance” are now buzzwords of the moment on the strategic agenda of policymakers and executives.

There is, however, a lack of consensus around what these phrases actually mean—what does AI Governance actually entail? We’d like to cut through the noise and provide a definition of AI Governance rooted in Credo AI’s experience working with organizations across different industries and sectors, collaborating with policymakers and standard-setting bodies worldwide, and supporting various global 2000 customers to deliver responsible AI at scale.

Here's our take on AI governance and what we've found is needed to effectively map, measure, and mitigate AI risk.

Governance keeps the plane in the air.

Retrieved from Stable Diffusion with the prompt “a realistic airplane flying in a blue sky.”

Governance—not just AI governance, but any governance—is the set of policies and processes that guides a system, usually to maximize benefits and mitigate risks. Governance provides oversight, ensures guardrails are in place to reduce risk, enables faster growth, and facilitates stable and productive collaboration.

We can illustrate this by looking at a simple example: a commercial flight.

Airplanes are complex systems, and keeping them in the air—and safely getting them back on the ground—requires massive oversight. There are a few different components involved in flying a plane safely:

  1. The plane itself: the plane is made up of many parts from different manufacturers, all of which need to be tested regularly for continued functionality.
  2. A cross-functional team: the captain, the cabin crew, the mechanics, the air traffic controllers, the engineers who built the plane in the first place, and even the passengers themselves all have critical roles to play in making sure that every flight goes smoothly.
  3. Tools: there are many tools involved in the testing, maintenance, and operation of the plane: the screwdrivers and torque wrenches mechanics use to adjust the engines, the pressure gauges and altimeters pilots use to make decisions in the cockpit, and the radar devices in the air traffic control tower are all examples of tools that are essential to safe flight.
  4. Data: each member of the team is constantly getting data and feedback from the tools they are using before, during, and after the flight—the pilots are monitoring wind speed and altitude, the cabin crew are checking on the status of the passengers, the air traffic controller is receiving input from multiple flights on location and landing time. There's a tremendous amount of data that needs to be processed and acted upon to get a plane off the ground and back down.
Each of these components is critical, but it is a fifth that brings them all together to keep the plane in the air:
  5. The rules, regulations, and guidelines that bring people, tools, and data together (aka governance): without a clear plan that articulates how these cross-functional stakeholders will come together to make effective decisions, the whole operation would be in chaos. What checks must be performed before the flight? When is the plane deemed “ready to fly”? Governance is about defining how people will work together, what tools they will use to collect which data, and how they will make decisions based on that information—all in the name of keeping the plane in the air.

As with a plane, a chief concern for many complex systems is proactively preventing catastrophic failures, which cannot be undone once they occur. In other words, unlike simpler systems that can rely on a test-and-learn approach—learning from mistakes or missed opportunities—the failures of complex systems can result in irreparable damage, like a plane crash. Listening to different stakeholders and considering all possible threats becomes critical, because after the harm is done, there is no way to turn back the clock.

AI Governance prevents AI failure.

Retrieved from Stable Diffusion with the prompt “broken robot with smoke around.”

An AI system is, just like a plane, a complex system with similar components—people, tools, and data. And, just like a plane, if these components aren’t brought together to keep the system “in the air,” behaving as desired and expected, the consequences can be severe and inflict harm. 

  1. The AI system itself: just like a plane, an AI system is made up of multiple components that may come from different sources—a set of machine learning models, each trained on a dataset, designed to come together to perform a specific task; and just like a plane, each of these components must be tested both individually and together to ensure that the system continues to function as expected and desired.
  2. Diverse stakeholders: from the technical teams of data scientists and ML engineers who build and deploy the systems, to the business stakeholders who decide where and how they should be used, to the regulators who decide where and how they can be used, to the end users and others impacted by the system—there is a complex network of people who have a stake in an AI system and its governance.
  3. Tools: if you’ve been tracking the venture-backed startup ecosystem, you are well aware of the Cambrian explosion of data, development, and MLOps tools that have popped up to help organizations measure and manage AI systems and the data needed to train them—these are the screwdrivers and torque wrenches of the AI development world.
  4. Data: data is, of course, the lifeblood of building AI systems, but the data we’re talking about here is data about an AI system. During development and deployment, organizations also collect a lot of data about their systems—performance metrics like precision and recall, bias metrics like demographic parity ratio and equalized odds difference, drift metrics like population stability index, and more. There are a million different ways to analyze an AI system and its training and production datasets, and all of these data points contribute to an understanding of how the system is behaving and whether that behavior is acceptable (two of these metrics are sketched in code just after this list).
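
To make that last component concrete, below is a minimal sketch of two such system-level metrics (a demographic parity ratio and a population stability index) computed with plain NumPy on toy data. It is purely illustrative: the function names, the equal-width binning for PSI, and the toy data are our assumptions, not a reference implementation from any particular toolkit.

```python
import numpy as np

def demographic_parity_ratio(y_pred, group):
    """Ratio of positive-prediction rates between two groups (1.0 = perfect parity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time feature distribution and its production distribution.
    Uses equal-width bins over the combined range for simplicity; quantile bins are also common."""
    edges = np.histogram_bin_edges(np.concatenate([expected, actual]), bins=bins)
    e_frac = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Toy data: binary decisions for two demographic groups, plus a feature that drifts in production.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, 1000)           # the model's yes/no decisions
group = rng.integers(0, 2, 1000)            # a sensitive attribute (0 or 1)
train_feature = rng.normal(0.0, 1.0, 1000)  # feature values seen at training time
prod_feature = rng.normal(0.3, 1.0, 1000)   # the same feature in production, slightly shifted

print("Demographic parity ratio:", demographic_parity_ratio(y_pred, group))
print("Population stability index:", population_stability_index(train_feature, prod_feature))
```

Mature toolkits (fairlearn's fairness metrics, for example) provide vetted implementations of many of these measurements; the point here is simply that "data about the AI system" is concrete and computable.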

Just as we saw with the plane, without a shared understanding of how these components come together—who is doing what with which tools, and what data is needed to make which decisions—an AI system can easily go “off the rails.” We don’t need to look further than some recent headlines to see just how grave the consequences can be. And so that brings us to the last component of an effective AI system, AI Governance:

  5. AI Governance: the collection of policies and processes that brings together the diverse stakeholders involved in developing and using AI systems to use the right tools, collect the right data, and make better decisions about how these systems should be built, deployed, used, and managed to maximize benefits and prevent harm.

One important way AI systems differ from planes is in their maturity. Planes are relatively well-understood systems, with decades of knowledge informing their technology, management, and effective use. AI systems are decidedly not like this. How does this affect governance? It means that governance will have to be more adaptable—less about defining hard rules and more about creating an agile system of oversight and coordination that can evolve along with the technology. Balancing this adaptability with standardization and process is the central challenge of AI governance. 

Done right, governance keeps AI systems better aligned with their design goals while improving, iteratively and intentionally, along with AI technology.

From Principles to Practice

So far, we’ve made the case for governance as an essential tool to ensure that a complex system functions in the ways that we want and need it to function. In our plane scenario, the desired functioning of the plane—to fly from point A to point B safely—is clear. But in our discussion of AI systems and AI governance, the ends are much less clear. Unlike planes, AI systems do not have universal goals but instead can perform a myriad of different tasks with different end results. AI governance, therefore, cannot be “one size fits all” but instead must be tailored to the specific goals of the AI system in question.

Retrieved from Stable Diffusion with the prompt “realist piece of paper with a plan.”

Where does the goal for the AI system come from? 

As is often the case, the answer is: it depends. From a business perspective, the goal is often narrowly to improve some process to increase profit. The AI system is a widget in a cascade of widgets, and its goal is to perform its well-scoped function effectively. A broader perspective is that the AI system should do something while respecting other goals, like fairness, transparency, or other “Responsible AI” characteristics. Broader still is recognizing the potential for negative externalities and minimizing them. Which of these sets of goals is applied to a system depends on “soft laws” (e.g., cultural expectations, values reflected by AI developers), standards developed by standard-setting bodies, “hard” laws setting particular requirements for different systems, and, most critically, the actual role of the AI system in some larger system.

For example, if a bank is developing an AI system that predicts whether an individual is creditworthy based on a number of financial data points, they are going to want this system to be accurate; that is, the bank has a vested business interest in ensuring that they avoid giving loans to people who cannot pay them back. At the same time, if this system is being used in the United States, there are legal requirements for credit risk prediction systems around non-discrimination based on protected attributes like race and gender, and around explainability, such that the bank can provide individuals with a clear explanation of why they were denied credit. This set of needs—efficacy, fairness, and transparency—comes together to describe the acceptable solution space in which the AI system must operate.
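
To give a flavor of the explainability requirement, here is a minimal, hypothetical sketch of generating reason codes from a simple linear credit model: a denial is explained by the features that pushed the applicant's score down the most. The feature names and the model are invented for illustration; real adverse-action explanations are considerably more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, standardized applicant features (names invented for illustration).
FEATURES = ["income", "debt_to_income", "credit_history_years", "recent_delinquencies"]

# Toy data in which repayment depends positively on income and credit history,
# and negatively on debt load and delinquencies.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(applicant, top_k=2):
    """Return the features whose (coefficient x value) contribution lowered the score the most."""
    contributions = model.coef_[0] * applicant
    most_negative = np.argsort(contributions)[:top_k]
    return [FEATURES[i] for i in most_negative]

# Explain the first applicant the model denies.
denied = np.where(model.predict(X) == 0)[0][0]
print("Denied. Principal reasons:", reason_codes(X[denied]))
```

For a linear model, these signed contributions are a reasonable basis for explanations; more complex models typically need dedicated techniques (SHAP values, for instance) to produce comparable reason codes.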

How do you translate an acceptable solution space into specific requirements, policies, and processes?

One can think of AI governance as starting by clearly and transparently articulating the goals for the AI system. Only with a clear definition can the difficult job of operationalization begin. “Operationalization” is the process of taking an abstract concept or goal (“make a transparent, fair, effective AI system”) and defining the measurements and actions that will instantiate it. You can also call this moving from “principles to practice”. Every policy choice, and every measurement decision, is a step in making loose goals concrete. 

Operationalization is like setting KPIs in a business. And, just like setting KPIs, operationalization can be helpful and imperfect (often both at the same time!). Sometimes a particular KPI may miss critical aspects of the broader objective that are only appreciated with time. Just as KPIs change and evolve with new information, so too must any operationalization evolve over time. 
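
As a small illustration of what moving from principles to practice can look like, here is a sketch of an operationalized requirement set for the credit example above, evaluated against measured metrics. The metric names and thresholds are placeholder assumptions for illustration, not recommended values.

```python
# Goals ("effective, fair, transparent") operationalized as measurable requirements.
# Thresholds are placeholder assumptions, not recommended values.
requirements = {
    "accuracy": lambda v: v >= 0.80,                  # efficacy
    "demographic_parity_ratio": lambda v: v >= 0.80,  # fairness (an "80% rule"-style check)
    "explanation_coverage": lambda v: v >= 0.99,      # share of denials that received reason codes
}

# Metrics gathered while assessing the credit model (toy numbers).
measured = {
    "accuracy": 0.86,
    "demographic_parity_ratio": 0.74,
    "explanation_coverage": 1.00,
}

for name, check in requirements.items():
    status = "PASS" if check(measured[name]) else "FAIL"
    print(f"{name}: {status} (measured {measured[name]})")
# A FAIL (here, the fairness check) triggers mitigation and re-assessment rather than deployment.
```

And, just like KPIs, these thresholds and metrics should be expected to change as understanding of the system and its context matures.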

With that said, there are useful ways of breaking down the broader problem of operationalizing goals into an AI Governance process. In Part #2: Operationalizing Responsible AI: How do you “do” AI Governance?, we will talk through these steps in detail. But in the meantime, here's a sneak peek at the four distinct steps that make up the AI Governance process, which is both linear and iterative:

  1. Alignment: identifying and articulating the goals of the AI system.
  2. Assessment: evaluating the AI system against the aligned goals. 
  3. Translation: turning the outputs of assessment into meaningful insights.
  4. Mitigation: taking action to prevent failure.

In Conclusion

There's a lot of buzz around AI these days. And while it's exciting to think about the potential AI has to change our lives—and change them for the better—we need to be aware of the risks involved. At the end of the day, AI systems are complex systems that can cause irreparable damage to society, and if we don't take the proper steps to ensure they are built, deployed, used, and managed responsibly, that damage becomes all but inevitable.

Retrieved from Stable Diffusion with the prompt “humans working in collaboration.”

That's why AI Governance should not be tacked on as an afterthought but considered in collaboration with a diverse group of stakeholders from day one. AI researchers and engineers must start building ethical considerations into their design process; companies should establish governance mechanisms before introducing AI into their operations; and regulatory bodies must ensure that their existing policymaking frameworks account for AI safety. Solving the inherent challenges of AI will require time and a lot of collaboration, but there is no other way: society's safety (and AI's future) depends on us getting this right.

🙋Interested in learning how to implement AI governance? Reach out to us at demo@credo.ai.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.