Partnerships

Credo AI’s Participation in IAPP Global AI Governance Conference: From Data Stewardship to AI Governance

By thoroughly evaluating the entire AI use case, including its model and data aspects, organizations can better assess and mitigate the risks associated with their AI systems.

November 6, 2023
Author(s)
Lucía Gamboa
Contributor(s)
Evi Fuelle
Susannah Shattuck

As an IAPP AI Governance Foundational Supporter, Credo AI was glad to join IAPP at its Global AI Governance Summit last week in Boston - especially in light of a series of historic global AI governance moments, including the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the G7 International Guiding Principles and Code of Conduct, and the Bletchley Declaration announced at the UK AI Safety Summit. The gathering marked an opportune moment to discuss the relationship and differences between data privacy and AI governance, including a panel discussion with Credo AI’s CEO Navrina Singh on “Navigating AI in the Enterprise.”

Privacy protections are foundational to the responsible AI journey. However, beyond managing privacy risks by “being good stewards of data,” organizations have a responsibility to thoroughly understand and monitor their AI-specific risks. To manage these risks, enterprises should be asking questions such as:

  • To what specific use cases are these models and data being applied? 
  • What models are we using, how were they trained, and are they fit for purpose? 
  • Who owns and maintains the models and data within the system? 

By thoroughly evaluating the entire AI use case, including its model and data aspects, organizations can better assess and mitigate the risks associated with their AI systems.

Navigating the Distinct Challenges of AI Governance

Governing AI systems presents unique risks and challenges compared to traditional data risk management. While data governance focuses on managing identifiable information, AI governance requires a context-specific, principles-based, and continuous approach to overseeing entire systems. This systemic oversight is crucial for several reasons:

  • Multiplying intangible risks: Unlike data privacy risks, AI risks are difficult to quantify because harm can compound across systems and across the AI lifecycle. Models may propagate biases or be misused in unforeseen ways, and these cascading risks require proactive governance. For instance, personal financial data collected by a fintech app poses a privacy risk if repurposed for advertising without consent. Beyond that specific privacy risk, if the same data trains an AI model to predict loan default risk, the potential harms (“AI risks”) multiply over time: the model may inadvertently learn and amplify biases against certain demographics as it interacts with other systems. Even if the initial privacy exposure is contained, the AI risks can compound in unseen ways across interconnected models.
  • Amplification across systems: With AI systems, localized risks can be exponentially amplified as models interact. For example, age data used appropriately by one system may still enable profiling and discrimination when shared across other systems.
  • Variability in upstream and downstream impacts: AI risks can vary significantly based on context. A model safe in one setting may pose new dangers if deployed elsewhere. AI governance must consider use cases and ecosystem impacts. For example, a facial recognition model developed to unlock phones can be reasonably low-risk in that limited use case. However, deploying the same model for law enforcement surveillance poses significant risks to civil liberties. 
  • Emergent unfairness and opacity: As systems interact, unfair outputs and opacity can emerge. AI governance requires that organizations continually re-evaluate evolving systems and establish guardrails to mitigate new risks. For example, a hiring algorithm may seem unbiased when evaluated in isolation. However, when deployed alongside other HR systems, feedback loops could entrench gender biases. Specifically, an AI system screening applicants may ingest data from performance reviews that inadvertently penalize behaviors more common in certain demographics; over time, this could lead the hiring algorithm to penalize those same groups. A simple version of the kind of recurring fairness check that can surface such drift is sketched below.
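To make this concrete, here is a minimal, hypothetical sketch of a periodic selection-rate (demographic parity) check that could be run against a deployed screening model. The function names, sample records, and the 10% tolerance threshold are illustrative assumptions, not a prescribed methodology; a real audit would pull production predictions and protected attributes from the organization’s own logging pipeline.

```python
# Minimal sketch: recurring selection-rate (demographic parity) check for a
# deployed screening model. All names and data here are hypothetical.
from collections import defaultdict

def selection_rates(records, predict):
    """Fraction of positive decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        positives[rec["group"]] += predict(rec)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

def screen_applicant(rec):
    # Stand-in for the real model's decision function.
    return 1 if rec["score"] >= 0.6 else 0

# Hypothetical audit sample; in practice, sampled from production logs.
audit_sample = [
    {"group": "A", "score": 0.70}, {"group": "A", "score": 0.65},
    {"group": "B", "score": 0.55}, {"group": "B", "score": 0.62},
]

rates = selection_rates(audit_sample, screen_applicant)
gap = parity_gap(rates)
if gap > 0.10:  # tolerance is a policy choice, not a universal constant
    print(f"ALERT: selection-rate gap {gap:.2f} exceeds tolerance: {rates}")
```

Run on a schedule, a check like this turns “continually re-evaluate evolving systems” into an operational guardrail: when the gap crosses the policy threshold, the alert can trigger a human review before a feedback loop entrenches the bias.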

Evolving Beyond Data Stewardship

For enterprises, just “being good stewards of data” is no longer sufficient to manage risk - organizations must evolve and learn to govern their AI systems responsibly and comprehensively. While this applies to enterprises of all sizes, building awareness and capacity among small and medium-sized businesses is especially important.

By leveraging responsible AI tools, companies can lower barriers to AI adoption and proactively address compliance risks. Solutions like Credo AI enable systematic documentation of AI use cases, apply risk-based controls, support comprehensive risk understanding and management, and help demonstrate compliance with existing and emerging regulations.

A critical step on this governance journey is defining benchmarks for evaluating and auditing AI systems. With the right guidelines and frameworks in place, organizations can implement ethical AI protocols, increase transparency, and reduce regulatory exposure. Prioritizing responsible AI governance helps build consumer trust and competitive differentiation, which in turn drive top-line growth and bottom-line optimization.

Defining System-Level Requirements

While data privacy requires governance at the organization level, AI governance extends beyond that to define requirements at the system level. At Credo AI, we believe there is a clear path to effectively governing AI systems, which involves:

  1. Maintaining a comprehensive repository of ML and AI use cases (a minimal sketch of such a registry entry follows this list);
  2. Monitoring model and data governance checks through actionable checklists;
  3. Performing technical and process evaluations across the entire AI lifecycle;
  4. Adapting to meet internal company policies and external regulatory or industry standards; and
  5. Gaining certifiable recognition.
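To illustrate steps 1 and 2, here is a minimal sketch of what a system-level use-case registry entry with tracked governance checks might look like. The schema, field names, and checks are illustrative assumptions only, not Credo AI’s actual product or data model.

```python
# Minimal sketch: a system-level AI use-case registry with per-use-case
# governance checks. Schema and check names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    owner: str                    # accountable team or individual
    lifecycle_stage: str          # e.g. "development", "deployed"
    risk_tier: str                # e.g. "low", "medium", "high"
    checks: dict = field(default_factory=dict)  # check name -> passed?

    def outstanding_checks(self):
        return [c for c, passed in self.checks.items() if not passed]

registry = [
    UseCase(
        name="loan-default-scoring",
        owner="credit-risk-team",
        lifecycle_stage="deployed",
        risk_tier="high",
        checks={
            "bias_evaluation": True,
            "data_provenance_review": False,
            "model_card_published": True,
        },
    ),
]

# An actionable checklist view: flag every use case with incomplete checks.
for uc in registry:
    missing = uc.outstanding_checks()
    if missing:
        print(f"{uc.name} ({uc.risk_tier} risk): incomplete checks -> {missing}")
```

Keeping every use case, its owner, and its check status in one repository is what makes the later steps, lifecycle-wide evaluations and adaptation to new policies or standards, tractable to audit.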

Aligning Privacy and AI Governance

AI risks are multifaceted, unpredictable, and compound rapidly across systems. Companies need to adopt governance practices rooted in comprehensive risk management, contextual governance, and regulatory compliance. By recognizing the distinct risk landscape of AI, we can develop governance that addresses these unique challenges.

Responsible AI governance should align with and uphold privacy rights. To achieve this, privacy professionals and AI governance experts should collaborate to establish best practices that organizations can implement to reduce business, reputational, and regulatory risks related to AI systems. By understanding the unique, contextual nature of AI risk management, companies can integrate ethical AI governance into their operations in a way that complements their existing privacy risk management and provides a competitive edge.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.