AI Governance

The Need for Comprehensive Technology Governance at the Intersection of Tools & Rules

June 29, 2022
Author(s)
Evi Fuelle
Contributor(s)
Eddan Katz

Effective technology governance requires tools that understand what the technology is doing. This is especially true in the case of Artificial Intelligence (AI), where tools that explain and interpret what the AI is doing become critical. Without such tools, an “information deficit” makes it extremely difficult to evaluate whether AI systems are creating risks and causing harm. Enforcing any guardrails for AI systems depends on tools that can bridge the gap between the innovative nature of the technology and the very real need for explainability and transparency.

AI has already become part of the fabric of our daily lives, and policymakers have taken action with “rules” in the form of Responsible AI (RAI) frameworks and guidelines. From the OECD AI Principles and the European Union’s AI Act at the supranational level, to the U.S. Department of Commerce’s NIST AI Risk Management Framework and New York City’s Algorithmic Hiring Bill, innumerable rules define what constitutes “risk” and outline proposed actions when these risks arise. However, without the proper tools to monitor, assess, and report on the performance of AI systems, RAI guidelines, frameworks, and legislation become irrelevant.

Credo AI is the only tool that operationalizes responsible AI to comprehensively address and actualize these rules. Any solution focused primarily on tools alone or on rules alone is simply incomplete. Credo AI was created to instill confidence in a comprehensive approach to oversight through governance, audit, and assurance that is compliant with RAI guidelines, frameworks, and legislation.

Pro-Ethical Design rather than Ethics by Design

As organizations begin to implement policies and controls in the governance of AI, they should ensure that the focus of their efforts remains on enabling responsibility. Ensuring human oversight over the operation of risky AI systems is the fundamental purpose of AI governance. The tools that are put into place as guardrails for AI systems need to preserve the ability of the individuals doing the monitoring to make choices, rather than constraining them to a predetermined set of rules.

That is why Credo AI enables those responsible for AI governance to be an active part of the risk management process, identifying risks as they develop. Giving humans responsibility for, and insight into, the performance of systems is necessary for developing RAI. Credo AI fosters an environment of accountability by directing the relevant information to the people whose role is to review and approve assessments of risky AI systems.

Meaningful accountability in AI governance must navigate the convergence of policy at the intersection of tools and rules. Credo AI codifies what good looks like for a given use case, enterprise, and regulatory environment, and enables contextual governance that measures against those values. Credo AI is uniquely built from the ground up to map the relevant rules concerning a particular AI application onto the tools that do the monitoring and assessment.

The Case for Context in RAI 

The initiative to implement guardrails in the design, development and deployment of AI systems has seen a great deal of activity at every level of policymaking - from international bodies to local law, and from cross-industry association guidelines to internal corporate norms. One common conclusion at each of these levels is that the effective governance of AI is highly dependent on its context.

AI governance that is contextual must start with the use case in which the AI system is applied. This is exactly why the Credo AI platform gives decision-makers the tools to make assessments according to what those systems actually do. As the rules around which consensus is emerging become increasingly contextual (focused on use cases), the people responsible for AI oversight need to know which rules apply.

As AI-driven transformation becomes ubiquitous, the focus must remain on preserving responsibility over the technology. A world where life-altering decisions are automated and predictions about human behavior are determined by algorithms creates an urgent need for human oversight. That oversight is only possible when humans are equipped with the tools they need to realize RAI, combining pro-ethical design, contextual governance, and human oversight.

Shaping the Norms of AI Governance

We are on the cusp of an incredible and historic moment, when the rules regarding how to develop and deploy AI are being set. It is fundamentally important that these laws and regulations reflect an acute awareness of how the technology can and should work. Rules created in the abstract, without consideration of where they will be applied and how they will be enforced, will neither accomplish their objectives nor endure the test of time.

Operationalizing RAI governance is not a trivial endeavor. The technology is complex and rapidly evolving, and we are collectively aware of only a small portion of the potential impacts AI may have on human lives. Effective, agile responsible AI governance will only emerge when we bridge the gap between innovators, technologists, policymakers, and the general public. Providing effective, cutting-edge tools like Credo AI to develop RAI and help humans govern its use has never been more critical.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.