
The Need for Comprehensive Technology Governance at the Intersection of Tools & Rules

Evi Fuelle
Global Policy Director
June 29, 2022
Contributor: Eddan Katz

Effective technology governance requires tools that understand what the technology is doing. This is especially true for Artificial Intelligence (AI), where tools that explain and interpret what an AI system is doing become critical. Without such tools, an “information deficit” makes it extremely difficult to evaluate whether AI systems are creating risks and causing harm. Enforcing any guardrails for AI systems depends on tools that can bridge the gap between the innovative nature of the technology and the very real need for explainability and transparency.
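To make this concrete, consider one well-known explainability technique: permutation importance, which estimates how much each input feature drives a model’s predictions by measuring how much performance degrades when that feature’s values are shuffled. The sketch below is illustrative only (it is not Credo AI’s tooling), and the dataset and model are stand-ins.

```python
# Illustrative sketch (not Credo AI's tooling): permutation importance asks
# how much a model's score drops when one feature's values are shuffled,
# giving a model-agnostic window into what the AI is actually doing.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model; any fitted estimator works the same way.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and measure the score degradation.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {importance:.3f}")
```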

AI has already become part of the fabric of our daily lives, and policymakers have taken action with “rules” in the form of Responsible AI (RAI) frameworks, guidelines, and legislation. From the supranational level of the OECD AI Principles and the European Union’s AI Act, to the U.S. Department of Commerce’s NIST AI Risk Management Framework and New York City’s Algorithmic Hiring Bill, innumerable rules define what constitutes “risk” and outline proposed actions when those risks arise. However, without the proper tools to monitor, assess, and report on the performance of AI systems, RAI guidelines, frameworks, and legislation become irrelevant.
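One example of how such a rule becomes measurable in practice: the bias audits required under New York City’s algorithmic hiring law center on an impact ratio, each group’s selection rate compared with that of the most selected group. Below is a simplified sketch of that arithmetic with hypothetical numbers (the official rules add further detail, such as intersectional categories).

```python
# Simplified sketch of the impact-ratio arithmetic behind AI bias audits:
# compare each group's selection rate to the most selected group's rate.
# The counts below are hypothetical.
selections = {"group_a": 120, "group_b": 45, "group_c": 30}    # candidates advanced
applicants = {"group_a": 400, "group_b": 200, "group_c": 150}  # candidates assessed

rates = {group: selections[group] / applicants[group] for group in selections}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    note = "" if ratio >= 0.8 else "  <-- below the common four-fifths benchmark"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f}{note}")
```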

Credo AI is the only tool for operationalizing responsible AI that comprehensively addresses and actualizes these rules at the intersection of tools and rules; any solution focused primarily on one or the other is simply incomplete. Credo AI was created to instill confidence through a comprehensive approach to oversight: governance, audit, and assurance that comply with RAI guidelines, frameworks, and legislation.

Pro-Ethical Design rather than Ethics by Design

As organizations begin to implement policies and controls for the governance of AI, they should keep their efforts focused on enabling responsibility. Ensuring human oversight of the operation of high-risk AI systems is the fundamental purpose of AI governance. The tools put in place as guardrails for AI systems need to preserve the ability of the individuals doing the monitoring to make choices, rather than constraining them with a predetermined set of rules.

That is why Credo AI enables those responsible for AI governance to be an active part of the risk management process, identifying risks as they develop. Giving humans responsibility for, and insight into, the performance of systems is necessary for developing RAI. Credo AI fosters an environment of accountability by directing the relevant information to the people whose role is to review and approve assessments of high-risk AI systems.

Meaningful accountability in AI governance must navigate the convergence of policy at the intersection of tools and rules. Credo AI codifies what good looks like for a given use case, enterprise, and regulatory environment, and enables contextual governance that measures against those values. Credo AI is uniquely built from the ground up to map the relevant rules concerning a particular AI application onto the tools that do the monitoring and assessment.
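In spirit, that mapping can be pictured as a registry that resolves a use case’s context into the requirements, and therefore the assessments, that apply to it. The sketch below is purely hypothetical; all names are illustrative, and none of them are Credo AI’s actual API.

```python
# Hypothetical sketch of rule-to-tool mapping for contextual governance.
# All names are illustrative; this is not Credo AI's actual API.
from dataclasses import dataclass

@dataclass
class Requirement:
    rule: str        # e.g. "NYC Local Law 144", "EU AI Act"
    assessment: str  # the check a monitoring tool must run

@dataclass
class UseCase:
    name: str
    jurisdiction: str
    domain: str

# A registry keyed by the context in which each rule applies.
RULEBOOK = [
    (lambda uc: uc.jurisdiction == "NYC" and uc.domain == "hiring",
     Requirement("NYC Local Law 144", "bias_audit")),
    (lambda uc: uc.jurisdiction.startswith("EU"),
     Requirement("EU AI Act", "risk_management_review")),
]

def applicable_requirements(use_case: UseCase) -> list:
    """Resolve which rules, and hence which assessments, a use case triggers."""
    return [req for applies, req in RULEBOOK if applies(use_case)]

screening = UseCase("resume_screener", jurisdiction="NYC", domain="hiring")
for req in applicable_requirements(screening):
    print(f"{screening.name}: run '{req.assessment}' to satisfy {req.rule}")
```

The point of the sketch is the design choice it illustrates: the rules live in data that governance staff can review and extend, rather than being hard-coded into the monitoring pipeline.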

The Case for Context in RAI 

The initiative to implement guardrails in the design, development, and deployment of AI systems has seen a great deal of activity at every level of policymaking, from international bodies to local law, and from cross-industry association guidelines to internal corporate norms. One common conclusion at each of these levels is that the effective governance of AI is highly dependent on its context.

AI governance that is contextual must start with the use case in which an AI system is applied. This is exactly why the Credo AI platform gives decision-makers the tools to assess AI systems according to what those systems actually do. As the rules around which consensus is emerging become increasingly contextual (focused on use cases), the people responsible for AI oversight need to know which rules apply.

As AI-driven transformation becomes ubiquitous, the focus must remain on preserving human responsibility for the technology. A world where life-altering decisions are automated and predictions about human behavior are made by algorithms creates an urgent need for human oversight. That oversight is only possible when humans are equipped with the tools they need to realize RAI, combining pro-ethical design, contextual governance, and human oversight.

Shaping the Norms of AI Governance

We are on the cusp of an incredible and historic moment, when the rules for how to develop and deploy AI are being set. It is fundamentally important that these laws and regulations be grounded in an acute awareness of how the technology can and should work. Rules created in the abstract, without consideration of where they will be applied and how they will be enforced, will neither accomplish their objectives nor endure the test of time.

Operationalizing RAI governance is not a trivial endeavor. The technology is complex and rapidly evolving, and we are collectively aware of only a small portion of AI’s potential impacts on human lives. Effective, agile RAI governance will only emerge when we bridge the gap between innovators, technologists, policymakers, and the general public. Providing effective, cutting-edge tools like Credo AI to develop RAI and help humans govern its use has never been more critical.
