Our Methodology for Developing the Profiles

Each Vendor Tool Risk Profile is a report that identifies the risks that apply to a specific generative AI tool and indicates whether the vendor has taken steps to mitigate those risks during the tool's development and deployment, based on publicly available documentation and, where possible, technical evaluations. We summarize this publicly available information in a report template using a standard set of risks, citing the sources for any steps or actions the vendor has taken to mitigate a specific risk.

To create a standard GenAI Vendor Tool Risk Profile report template, we began by defining a standard set of nine AI risks, drawing on existing AI risk frameworks such as the NIST AI Risk Management Framework and the OECD AI Principles, as well as recent academic research into the specific risks of generative AI systems.

These nine risks are:

  • Performance and Robustness: Pertains to the AI's ability to fulfill its intended purpose accurately and its resilience to perturbations, unusual inputs, or adverse situations. Failures can lead to severe consequences, especially in critical applications.
  • Fairness & Bias: Arises from the potential for AI systems to make decisions that systematically disadvantage certain groups or individuals. Bias can stem from training data, algorithmic design, or deployment practices, leading to unfair outcomes and possible legal ramifications.
  • Explainability & Transparency: Refers to the ability to understand and interpret an AI system's decisions and actions, and the openness about the data used, algorithms employed, and decisions made. Lack of these elements can create risks of misuse, misinterpretation, and lack of accountability.
  • Security: Encompasses potential vulnerabilities in AI systems that could compromise their integrity, availability, or confidentiality. Security breaches could result in significant harm, from incorrect decision-making to privacy violations.
  • Compliance: Involves the risk of AI systems violating laws, regulations, and ethical guidelines. Non-compliance can lead to legal penalties, reputational damage, and loss of user trust.
  • Privacy: Refers to the risk of AI infringing upon individuals' rights to privacy, through the data they collect, how they process that data, or the conclusions they draw.
  • Societal Impact: Concerns the broader changes AI might induce in society, such as labor displacement, mental health impacts, or the implications of manipulative technologies like deepfakes.
  • Long-term and Existential Risk: Considers the speculative risks posed by future advanced AI systems to human civilization, either through misuse or due to challenges in aligning their objectives with human values.
  • Misuse: Pertains to the potential for AI systems to be used maliciously or irresponsibly, including for creating deepfakes, automated cyber attacks, or invasive surveillance systems.

To evaluate whether each of these risks has been mitigated for a given tool, we analyzed publicly available documentation from each vendor, including:

  • Any publicly available technical documentation
  • Third party evaluations or research published about the tool and its capabilities or limitations
  • Public lawsuits, regulatory notices, and other legal issues about the tool
  • Public marketing websites and press releases

Based on our expertise in AI risk management and risk mitigation approaches, we noted any indications we could find in the above documentation and artifacts that the vendor has taken steps to mitigate the nine risks identified above. When we could not find any mention of mitigations undertaken by the vendor, we call this out in the report as a potential “unmitigated risk.”
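As a rough illustration, this process amounts to mapping each of the nine standard risks to whatever mitigation evidence was found, and flagging risks with no evidence as potentially unmitigated. The sketch below is hypothetical, not our actual tooling; the names (`RiskEntry`, `build_profile`) and the sample evidence strings are invented for illustration.

```python
from dataclasses import dataclass, field

# The nine standard risks in our taxonomy.
RISKS = [
    "Performance and Robustness",
    "Fairness & Bias",
    "Explainability & Transparency",
    "Security",
    "Compliance",
    "Privacy",
    "Societal Impact",
    "Long-term and Existential Risk",
    "Misuse",
]

@dataclass
class RiskEntry:
    """One row of a Vendor Tool Risk Profile: a risk plus cited evidence."""
    risk: str
    # Sources documenting vendor mitigations (technical docs, third-party
    # evaluations, legal notices, marketing materials); empty if none found.
    evidence: list = field(default_factory=list)

    @property
    def unmitigated(self) -> bool:
        # No publicly documented mitigation -> flag as a potential
        # "unmitigated risk" in the report.
        return not self.evidence

def build_profile(evidence_by_risk: dict) -> list:
    """Assemble a profile covering all nine risks, flagging gaps."""
    return [RiskEntry(r, evidence_by_risk.get(r, [])) for r in RISKS]

# Hypothetical example: evidence found for only two of the nine risks.
profile = build_profile({
    "Security": ["Vendor security whitepaper"],
    "Privacy": ["Privacy policy", "Third-party audit summary"],
})
unmitigated = [entry.risk for entry in profile if entry.unmitigated]
```

In this sketch, the seven risks with no cited sources would surface in the report as potential unmitigated risks.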

Looking for more information about the risks of your AI vendor tools?

AI risk evaluation is context-dependent. We have provided a high-level overview of the potential AI risks of vendor tools, but without more detail about the exact context in which these tools will be used, we cannot provide a more detailed risk analysis.

If you are looking to evaluate a generative AI vendor tool for risk within your specific business context and proposed use, Credo AI is here to help. Our third-party AI risk management platform makes it easy to collect evidence from your vendors and evaluate whether your vendor tools are meeting your risk and compliance requirements.

Learn more about how Credo AI can help you kickstart your third-party AI risk management program today.

Book a demo here.