Regulatory

The Need for Comprehensive Technology Governance at the Intersection of Tools & Rules

Evi Fuelle
Global Policy Director
June 29, 2022
Contributor(s):
Eddan Katz

Effective technology governance requires tools that understand what the technology is doing. This is especially true in the case of Artificial Intelligence (AI), where tools that explain and interpret what the AI is doing become critical. Without such tools, an “information deficit” makes it extremely difficult to evaluate whether AI systems are creating risks and causing harm. Enforcing any guardrails for AI systems depends on tools that can bridge the gap between the innovative nature of the technology and the very real need for explainability and transparency.
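
To make that concrete, the snippet below is a minimal, hypothetical sketch of the kind of explainability tooling this paragraph refers to: it uses scikit-learn's permutation importance to surface which inputs actually drive a model's predictions. The dataset and model are stand-ins for illustration, not part of any particular platform.

```python
# Hypothetical sketch: surfacing which inputs drive a model's decisions,
# the kind of transparency signal an "information deficit" otherwise hides.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model; any deployed AI system would take their place.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature influences predictions,
# giving reviewers evidence to judge whether the system behaves as intended.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```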

AI has already become part of the fabric of our daily lives, and policymakers have responded with “rules” in the form of Responsible AI (RAI) frameworks and guidelines. From the supranational OECD AI Principles and the European Union’s AI Act to the U.S. Department of Commerce’s NIST AI Risk Management Framework and New York City’s algorithmic hiring law, innumerable rules define what constitutes “risk” and outline proposed actions when those risks arise. However, without the proper tools to monitor, assess, and report on the performance of AI systems, RAI guidelines, frameworks, and legislation become irrelevant.

Credo AI is the only tool for operationalizing Responsible AI that comprehensively addresses and actualizes these rules. Any solution focused primarily on tools or on rules alone is simply incomplete. Credo AI was created to instill confidence in a comprehensive approach to oversight through governance, audit, and assurance that is compliant with RAI guidelines, frameworks, and legislation.

Pro-Ethical Design rather than Ethics by Design

As organizations begin to implement policies and controls for the governance of AI, they should ensure that the focus of their efforts remains on enabling responsibility. Ensuring human oversight over the operation of risky AI systems is the fundamental purpose of AI governance. The tools that are put in place as guardrails for AI systems need to preserve the ability of the individuals doing the monitoring to make choices, rather than constrain them with a predetermined set of rules.

That is why Credo AI enables those responsible for AI governance to be an active part of the risk management process, identifying risks as they develop. Giving humans responsibility for and insight into the performance of systems is necessary for developing RAI. Credo AI fosters an environment of accountability by directing the relevant information to the people whose role is to review and approve assessments of risky AI systems.
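
As one illustration of what routing evidence to an accountable human reviewer can look like in practice, here is a minimal, hypothetical sketch; every class, field, and value is invented for the example and does not describe Credo AI's product.

```python
# Hypothetical sketch: assessment evidence is attached to a named human reviewer,
# and the decision plus its rationale are recorded alongside that evidence.
from dataclasses import dataclass

@dataclass
class ReviewItem:
    system: str
    evidence: dict             # metrics, reports, and context gathered by tooling
    reviewer: str              # the accountable human, not an automated rule
    decision: str = "pending"  # becomes "approved" or "rejected" after review
    rationale: str = ""

def record_decision(item: ReviewItem, decision: str, rationale: str) -> ReviewItem:
    """Capture the human judgment alongside the evidence that informed it."""
    item.decision = decision
    item.rationale = rationale
    return item

# Illustrative values only.
item = ReviewItem(
    system="resume-screening-model",
    evidence={"parity_ratio": 0.72, "accuracy": 0.91},
    reviewer="governance-lead@example.com",
)
record_decision(item, "rejected", "Selection-rate parity below internal policy.")
print(item)
```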

Meaningful accountability in AI governance must navigate the convergence of policy at the intersection of tools and rules. Credo AI codifies what good looks like for a given use case, enterprise, and regulatory environment, and enables contextual governance to measure against those values. Credo AI is uniquely built from the ground up to map the relevant rules concerning a particular AI application onto the tools that do the monitoring and assessment.
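
A simplified, hypothetical sketch of that mapping is shown below: the rules that apply to a use case are expressed as measurable requirements, and assessment results are compared against them. The policy names, metrics, and thresholds are illustrative assumptions, not actual regulatory values or Credo AI functionality.

```python
# Hypothetical sketch: mapping the rules that apply to a use case onto
# concrete assessment checks, then measuring results against them.
from dataclasses import dataclass

@dataclass
class Requirement:
    rule: str         # e.g. a clause from an internal policy or regulation
    metric: str       # the measurable quantity the tooling must report
    threshold: float  # the value the assessment is measured against

# Illustrative requirements for one use case; real rules would come from
# the applicable regulatory and enterprise context.
USE_CASE_REQUIREMENTS = {
    "automated_hiring": [
        Requirement("demographic parity in selection rates", "parity_ratio", 0.8),
        Requirement("documented model accuracy", "accuracy", 0.85),
    ],
}

def assess(use_case: str, metrics: dict[str, float]) -> list[str]:
    """Compare measured metrics against the rules for this use case."""
    findings = []
    for req in USE_CASE_REQUIREMENTS.get(use_case, []):
        value = metrics.get(req.metric)
        status = "PASS" if value is not None and value >= req.threshold else "REVIEW"
        findings.append(f"[{status}] {req.rule}: {req.metric}={value}")
    return findings

# Example: results from a model evaluation feed into contextual governance.
print("\n".join(assess("automated_hiring", {"parity_ratio": 0.72, "accuracy": 0.91})))
```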

The Case for Context in RAI 

The push to implement guardrails in the design, development, and deployment of AI systems has generated a great deal of activity at every level of policymaking - from international bodies to local law, and from cross-industry association guidelines to internal corporate norms. One common conclusion at each of these levels is that the effective governance of AI is highly dependent on context.

AI governance that is contextual must start with the use case in which the AI system is applied. This is exactly why the Credo AI platform gives decision-makers the tools to make assessments based on what those systems actually do. As emerging consensus makes the rules increasingly contextual (focused on use cases), people responsible for AI oversight need to know which rules apply.

As the transformation driven by AI becomes ubiquitous, the focus must remain on preserving responsibility over the technology. A world where life-altering decisions are automated and predictions about human behavior are determined by algorithms creates an urgent need for human oversight. That oversight is only possible when humans are equipped with the tools they need to realize RAI, combining pro-ethical design, contextual governance, and human oversight.

Shaping the Norms of AI Governance

We are on the cusp of an incredible and historic moment, when the rules for how to develop and deploy AI are being set. It is fundamentally important that these laws and regulations reflect an acute understanding of how the technology can and should work. Rules created in the abstract, without consideration of where they will be applied and how they will be enforced, will neither accomplish their objectives nor endure the test of time.

Operationalizing RAI governance is not a trivial endeavor. The technology is complex and rapidly evolving, and we are collectively aware of only a small portion of AI’s potential impacts on human lives. Effective, agile, responsible AI governance will only emerge when we bridge the gap between innovators, technologists, policymakers, and the general public. Providing effective, cutting-edge tools like Credo AI to develop RAI and help humans govern its use has never been more critical.
