AI Compliance

OMB’s Privacy Impact Assessments: How agencies can address and mitigate AI-related privacy risks

Credo AI submitted comments on OMB’s Request for Information (RFI) on Improving the Use of Privacy Impact Assessments (PIAs) To Mitigate Privacy Risks.

April 9, 2024
Author(s)
Lucía Gamboa

Credo AI submitted comments on OMB’s Request for Information (RFI) on Improving the Use of Privacy Impact Assessments (PIAs) To Mitigate Privacy Risks, highlighting how privacy risk management can be made compatible with AI governance frameworks.

Organizations, including enterprises and government agencies, face questions and challenges in governing new uses of data and the risks that emerge from AI. OMB should consider updating its guidance to improve how agencies address and mitigate privacy risks associated with their use of AI, taking into account the following:

  • Unique Risks of AI: Agencies must go beyond “being good stewards of data” and deploy risk management plans that account for the contextual nature of AI, including the risks that arise when AI models interact with one another. Federal agencies may already have mature frameworks to identify, measure, and mitigate privacy risks; integrating these frameworks into a broader AI governance framework is a best practice that enterprises are already adopting successfully.
  • System-Level Approach: The unique risks associated with AI will require Federal agencies to take an organizational and system-level approach to managing them. AI governance demands more than traditional data governance: whereas data governance typically focuses on managing and securing data assets, AI governance requires a comprehensive, system-level approach tailored to the specific contexts in which AI systems operate. This involves a thorough understanding of the AI models in use, including their training processes, ownership, and maintenance, as well as a comprehensive assessment of associated risks, including privacy concerns.
  • Privacy Across the AI Lifecycle: Privacy concerns can emerge at various stages of the AI development lifecycle. OMB guidance should require privacy evaluations at each stage, including design, development, deployment, and ongoing operation.
  • Advanced Privacy-Compromising Risks: OMB guidance should extend beyond traditional privacy considerations to address new types of privacy-compromising risks and vulnerabilities. At the AI system level, this means agencies should consider risks that arise during the training, evaluation, or use of AI and AI-enabled systems, as well as potential attacks such as membership and attribute inference attacks; a minimal illustration of a membership inference check follows this list.
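
To make the membership inference risk mentioned above concrete, the sketch below shows a simple loss-threshold membership inference check in Python. It is illustrative only: the toy dataset, model, and threshold are assumptions for demonstration, not a methodology prescribed by OMB or used in Credo AI’s submission. The intuition is that an attacker who can observe a model’s confidence may guess that examples with unusually low loss were part of its training data.

```python
# Minimal sketch of a loss-threshold membership inference check.
# Hypothetical illustration only: the model, data split, and threshold are
# assumptions, not a prescribed OMB or Credo AI methodology.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a simple classifier on "member" data; hold out "non-member" data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def per_example_loss(clf, features, labels):
    """Cross-entropy loss for each example under the trained model."""
    probs = clf.predict_proba(features)
    return -np.log(np.clip(probs[np.arange(len(labels)), labels], 1e-12, None))

# Simple attack: examples with unusually low loss are guessed to be training members.
threshold = np.median(per_example_loss(model, X_nonmember, y_nonmember))
guess_member = per_example_loss(model, X_member, y_member) < threshold
guess_nonmember = per_example_loss(model, X_nonmember, y_nonmember) < threshold

# Attack "advantage": how much better than chance the threshold separates members.
advantage = guess_member.mean() - guess_nonmember.mean()
print(f"Membership inference advantage: {advantage:.2f} (0.0 = no leakage signal)")
```

In practice, agencies would run evaluations like this with purpose-built auditing tools against the models they actually deploy; a larger “advantage” indicates more memorization of individual records and therefore greater privacy leakage.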

The Path Ahead 

Concrete ways in which OMB guidance can be updated to improve how agencies address and mitigate privacy risks associated with their use of AI include:

  • Integrate Privacy Evaluations: OMB guidance should explicitly require that privacy evaluations be integrated at each step of the AI development lifecycle, including design, development, deployment, and ongoing operation.
  • Employ Privacy-Enhancing Technologies: For AI systems developed within Federal agencies, OMB guidance should advocate for the use of differential privacy and other best practices that protect sensitive personal data during AI model training. This includes assessing the risk of personal data leakage and ensuring that AI outputs do not compromise individual privacy (a minimal sketch of a differentially private training step follows this list).
  • Ensure Transparency and User Control: Guidance updates should emphasize the need for transparency towards end users about how their personal data will be used by AI systems. Users should have robust consent mechanisms and control over their data, and Federal agencies should periodically audit their data privacy practices and governance.
  • Data Minimization and Limitations: OMB should promote the importance of collecting only the data necessary for the AI system’s specific purpose, encouraging agencies to regularly review and justify the data they collect and retain.
  • Incorporate PIAs into AIAs: OMB should recommend that PIAs be proactively incorporated into algorithmic impact assessments (AIAs) during critical phases such as system design, when modifying existing systems, and prior to deployment. This integration ensures that privacy risks are addressed alongside algorithmic impacts, promoting a comprehensive evaluation of AI systems.
  • Address Advanced Privacy-Compromising Risks: OMB guidance should extend beyond traditional privacy considerations to address new types of privacy-compromising risks. Agencies should be equipped to understand these attacks, strengthen security measures (for both AI and data), and implement strategies to ensure resilience against them.
  • Regular Monitoring and Evaluation: OMB should advocate for ongoing monitoring and evaluation of AI systems, using a comprehensive AI governance framework, to identify and mitigate privacy risks throughout the system lifecycle, including reviews of outputs and impacts.
  • Training and Awareness: Agencies should ensure that personnel involved in the development, deployment, and oversight of AI systems receive appropriate training on AI ethics, privacy risks, and mitigation strategies to build AI literacy. This should also include encouraging agencies to engage with a wide range of stakeholders, including the public, privacy advocates, and AI ethics experts, to inform the development and use of AI systems.
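
As a rough illustration of the differential privacy practice referenced in the list above (see “Employ Privacy-Enhancing Technologies”), the sketch below shows a single DP-SGD-style update in Python: each example’s gradient is clipped and calibrated Gaussian noise is added before averaging, so no individual record can dominate the model update. The toy data, clip norm, noise multiplier, and learning rate are assumptions for illustration; a real system would rely on a vetted differential privacy library and formal privacy accounting rather than this hand-rolled step.

```python
# Minimal sketch of one differentially private gradient step (DP-SGD style):
# per-example gradient clipping plus calibrated Gaussian noise.
# Hypothetical illustration only: the model, clip norm, and noise multiplier
# are assumptions, not parameters recommended by OMB or Credo AI.
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensitive" dataset: features X and binary labels y for a logistic model.
n, d = 256, 10
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
w = np.zeros(d)

clip_norm = 1.0         # max L2 norm of any single example's gradient
noise_multiplier = 1.1  # noise scale relative to clip_norm (drives the privacy budget)
learning_rate = 0.1

def per_example_gradients(weights, features, labels):
    """Gradient of the logistic loss for each example, one row per example."""
    preds = 1.0 / (1.0 + np.exp(-features @ weights))
    return (preds - labels)[:, None] * features

grads = per_example_gradients(w, X, y)

# 1) Clip each example's gradient so no individual can dominate the update.
norms = np.linalg.norm(grads, axis=1, keepdims=True)
clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

# 2) Add Gaussian noise calibrated to the clip norm before averaging.
noise = rng.normal(scale=noise_multiplier * clip_norm, size=d)
noisy_mean_grad = (clipped.sum(axis=0) + noise) / n

# 3) Take an ordinary gradient step with the privatized gradient.
w -= learning_rate * noisy_mean_grad
print("Updated weights (first three):", np.round(w[:3], 4))
```

The design choice worth noting is that the noise is scaled to the clipping bound: the clip norm caps how much any single person’s data can move the model, and the noise masks whatever contribution remains, which is what yields a quantifiable privacy guarantee when tracked across many training steps.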

Read our submission here.

At Credo AI, we believe it is important to understand the differences and overlap between AI and privacy risk management to develop effective governance processes. Credo AI stands ready and willing to help OMB operationalize risk management for public sector use. 

To learn more about how we think about privacy and AI risk management and our engagement on this topic, check out the following blog posts:

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.