Credo AI Perspective on Voluntary Commitments to Manage AI Risks Announced by OSTP

Navrina Singh
Evi Fuelle
7/22/2023
Contributors:
Ian Eisenberg
Eli Sherman, Ph.D.
Ehrik Aldana
Susannah Shattuck

With the most recent announcement by the White House regarding the “Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” Credo AI is encouraged to see the continued dedication from the Biden-Harris administration to the robust development of responsible AI governance.

This swift action continues to showcase a serious commitment by the Biden-Harris Administration to push industry to uphold the highest standards and to ensure that innovation does not come at the expense of Americans’ rights and safety. Both everyday end users of AI and the U.S. federal government deserve more transparency when it comes to these extremely powerful models. However, more work is needed to make today’s commitments actionable, impactful, and global in scope.

Attention now turns to the next steps, chief among them the announcement (as part of the White House Fact Sheet) that an Executive Order will be developed in coordination with these commitments.

"Voluntary commitments don't have significant impact we're looking to see, so the next step is the Executive Order, and more importantly, what the Executive Order would mandate, because what these large powerful AI systems need is more technology informed regulations and mandates to make sure there is delivery against what is expected of them.…..What is going to be important is the Executive Order that will be coupled with these commitments, that's what our eyes are on right now.”
- Navrina Singh, Bloomberg TV Interview 21 July,
view the full interview here.  

In addition to the forthcoming Executive Order, Credo AI encourages the seven companies named today (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to pair their written commitments to safety, security, and trust with increased financial commitments to AI safety and governance, including investments in capacity building by hiring safety experts and investing in governance tools. Lastly, this work will only be truly impactful if these commitments extend throughout the entire AI value chain, beyond this handful of large language model providers, and become a global effort that prevents regulatory capture and brings true accountability.

“Credo AI is encouraged to see this continued commitment from the Biden-Harris administration to responsible AI innovation and AI governance. We support these commitments, and we hope that they can be pervasive and adopted across the entire AI value chain, not just limited to a small group of foundation model providers. Both foundation model providers and application developers need to be more transparent, and can share more information regarding their self-governance, including how anticipated downstream uses are understood, safeguarded, and reported on.”
- Evi Fuelle, Global Policy Director, Credo AI 

Credo AI stands ready to support OSTP and the U.S. federal government in this endeavor to improve safeguards for large language and foundation models, and to bring greater transparency and accountability to the AI ecosystem at large.

Credo AI is a Responsible AI Governance company that has been a driving force for change within the global AI ecosystem, pushing for greater transparency, governance, and oversight of AI and its responsible development and use since its inception in 2020. Credo AI’s Founder and CEO, Navrina Singh, serves as a Member of the National AI Advisory Committee (NAIAC) and as an OECD AI expert on risk and accountability, and has dedicated her career to advocating for Responsible AI Governance and the safe development and deployment of AI at the global scale.

Credo AI is a software tool provider: we have built a Responsible AI Governance platform that supports contextual AI governance in companies of all sizes, from the Global 2000 to early-stage startups, across industries including financial services, insurance, healthcare, human resources, public-sector use of AI (including federal procurement), and more.

Given our AI and ethics expertise and deep experience operationalizing Responsible AI Governance, Credo AI has thought deeply about the elements that would comprise an effective “AI Code of Conduct,” or would otherwise constitute meaningful AI commitments that advance safety, security, and trust for these large and powerful AI models. In conjunction with what OSTP published today, Credo AI has identified the following elements as critical to any further effective “AI Commitments”:

  • A commitment to provide transparency on the governance processes used to develop and deploy AI systems, including descriptions of how decisions about an AI system’s behavior are made, how external stakeholders (e.g., governments, NGOs, and the general public) are engaged, and how the values implicit in the system are defined and instantiated.
  • A commitment to provide disclosures around the foundation model’s capabilities and risks to support third-party verification, operationalized by quantitative and qualitative evaluations, including information (e.g., code and data) that allows a third party to replicate the foundation model provider’s findings. Foundation model providers can supply the requisite content for a third-party assessor or auditor to test a foundation model’s bias, self-awareness, and other dimensions without releasing actual model code or training data (IP and proprietary information).
  • A commitment to document when human oversight is removed, including a description of why and how automation is introduced. As oversight is gradually loosened in favor of automation, model providers should document and disclose what oversight is being removed, why, and how.
  • A commitment to open communication between foundation model providers and application developers, including shared tracking of how foundation models are being used by application developers and incident reporting. 
  • A commitment to the creation of robust human-in-the-loop systems, including ways to ensure that users are made aware when they are communicating with AI, and ways to ensure verification via red-teaming and external audits. This should include commitments to governing downstream users: foundation model providers should commit to setting rules that require application developers to impose governance processes and report automation.
  • A commitment to iterative design, development, and evaluation processes. Evaluation for risks and safety issues should play an integral role throughout research and development processes. Responsibly validating the behavior of AI systems should be a core component of the entire development lifecycle.
  • A commitment to adverse incident reporting made available throughout the value chain, including to model providers, application developers, affected parties, the public at large, and governing bodies.
  • A commitment to monitoring incidents: a robust incident reporting and support structure, both internal and external, to address transparency issues and define reporting requirements.
  • A commitment to feedback loops: raising substantial risks externally through avenues like the AI Incident Database.
  • A commitment to address the risk of misinformation, including a process for handling misuses that target democratic processes or national security.
  • A commitment to a standardized deployment protocol that includes post-deployment monitoring and a rollback plan in the event of significant adverse incidents.

We commend the White House for championing voluntary commitments from the leading companies in Generative AI. Establishing trust in these companies and the AI technologies they craft is essential for broad adoption. The American people, and indeed the American government, should expect and receive genuine transparency from these transformative AI companies. The pledges made today mark a commendable beginning, ensuring that trust is rightfully earned and maintained by the companies designing the foundation models that are rapidly shaping our societal future. Still, this journey has only just begun.

We are eager to build on the commitments made today and to continue our collaboration with vital public and private partners globally to promote the large-scale adoption of AI that is both safe and responsible. Together, we will safeguard fundamental human rights in this new age of AI.