Generative AI

Navigating The GenOps Market: Tools That Promote Responsible Practices And De-Risk Generative AI

March 30, 2023
Contributor(s)
Catharina Doria
Ian Eisenberg

In this post, we will explore some of the top low-barrier tools that can help you de-risk your generative AI efforts and ensure they adhere to the best practices of Responsible AI.

With the rapid expansion of Artificial Intelligence’s capabilities, genAI is the hottest topic in the tech industry. From Google Docs to Excel to Firefly, companies around the world are integrating generative AI into their products to streamline processes and boost productivity. As more organizations adopt this technology, it's clear that generative AI is the future of business. 

While the promise of genAI is immense, the potential risks attached to this technology cannot be overlooked. 

From the rise of harmful content to concerns around cybersecurity, there are numerous potential downsides and risks to its development and use, and it is critical for businesses to address these issues to ensure that genAI is implemented in a safe and responsible manner. 

If you're someone who wants to start using genAI, but is worried about the potential risks of these tools, don't worry. You're not alone!

(And there’s good news!)

Fortunately, there are numerous low-barrier tools available in the market that can help address the responsibility dimensions of genAI, like fairness, transparency, and explainability.

These "genAI Ops," a la MLOps or DevOps, can help you mitigate risk and improve the AI ROI of your business without requiring substantial technical innovation.

At Credo AI, we're committed to highlighting current low-barrier genAI Ops tools, so organizations all over the world can start taking action today to unlock the full potential of genAI responsibly. If you're interested in a more comprehensive solution and guidance, register today for our genAI Trust Toolkit Early Access program!

Without further ado, let’s talk genAI Ops!

Simplifying genAI: An Overview of Categories

When it comes to genAI Ops tools, there are two crucial dimensions: stakeholder categorization and tool categorization.

1. Stakeholder Categorization:
From end-users to technical developers, different stakeholders play a crucial role in the development and use of genAI. Today, we will use four abbreviations (U, D, F, and GS) to discuss stakeholder categorization and how it can inform your selection of genAI Ops tools. These correspond to the following:

  • U: Users of genAI and their organizations, such as someone using ChatGPT for copywriting or Midjourney for design.
  • D: Application developers who start with a foundation model (such as GPT-3) and build an app on top (such as ChatGPT or Jasper).
  • F: Foundation model developers who create the underlying genAI models.
  • GS: General stakeholders who are not directly using the genAI but are affected by it in some way, such as a college professor trying to catch or avoid plagiarism or a social media network trying to filter AI-generated or toxic content.

2. GenAI Ops Tools Categorization:
Different genAI Ops tools require different levels of user involvement and automation. Here are the most common categories, and the ones we address today:

  • Automated: These tools can mitigate risk without requiring constant human input. 
  • User-in-loop I: These tools require some user effort; whether automating them is worthwhile depends on your scale. 
  • User-in-loop II: These tools are simpler and easier to use, but slower because they need user input, and they are harder to automate. 
  • Others: These are best practices that aren't specific to genAI but can still be useful.

With these abbreviations and categories in mind, we can explore the different tools that can be used to ensure responsible development and use of genAI by each type of stakeholder.

Breaking It Down: The Different genAI Ops Tools In The Market

A. Automated genAIOps Tools:

Organizations are seeking to maximize their productivity and efficiency with the help of advanced technologies like genAI. In response to this growing demand, we will cover accessible, user-friendly automated genAIOps tools that address critical areas such as API/App Settings, AI Detection, and Content Moderation.

  1. API/App Settings [U, D, F]

Are you considering using genAI APIs like OpenAI's or fully user-facing tools like Copilot? 

Although these tools offer incredible benefits and can significantly reduce the workload for developers and creators, it is important to remember that they also come with certain risks. These risks include the potential for inappropriate or offensive content, intellectual property infringement, bias or discrimination, and privacy concerns.

Fortunately, there are tools that can support you in mitigating these risks:

1. GitHub Copilot has settings to prevent public code from being generated, enabling you to limit the risk of violating code licenses. Additionally, Copilot has a setting that prevents your code from being used for downstream development, limiting outbound intellectual property risk. 
2. ChatGPT's default settings prevent OpenAI from using your conversations in future model training, which can help limit the possibility of leaking company secrets.

By understanding the available API and app settings, choosing risk-conscious defaults, and opting into risk-reducing options, you can protect your organization from many of the risks these tools introduce. 
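These settings live in each vendor's UI rather than in code, but one low-effort practice is to record the choices you have made in a machine-readable policy file that your team can diff at each quarterly review. Below is a minimal sketch in Python; every setting name is hypothetical shorthand for the corresponding UI option, not a real API:

    # Hypothetical policy file: record the risk-reducing settings chosen per tool,
    # so quarterly reviews compare a file instead of clicking through UIs.
    GENAI_APP_SETTINGS = {
        "github_copilot": {
            "block_public_code_suggestions": True,   # limit license risk
            "share_code_for_product_improvement": False,  # limit outbound IP risk
        },
        "chatgpt": {
            "use_conversations_for_training": False,  # limit data-leak risk
        },
    }

    def review_settings(observed: dict) -> list[str]:
        """Return the tools whose live settings have drifted from the policy."""
        return [
            tool
            for tool, expected in GENAI_APP_SETTINGS.items()
            if observed.get(tool) != expected
        ]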

Upfront time commitment: ~30 minutes for implementation per app or model.
Ongoing time commitment: minimal, with periodic reviews of settings only requiring a few minutes each quarter.

Important: For genAI RFPs, our Credo AI Vendor Portal is an ideal resource to request and receive reports from candidates on the availability of de-risking settings.

  2. AI Detection [GS]

As genAI becomes more popular for content generation, it's essential to distinguish AI-generated from human-generated content to avoid the risks of inappropriate or biased material. AI detection tools make that distinction efficient, scalable, and cost-effective.

Here are a few examples of genAIOps tools for AI Detection:

1. Hive has a powerful tool that allows digital platforms to easily identify and moderate realistic synthetic images and video.
2. OpenAI has made available a tool to detect whether articles are AI-generated or human-written.

Both tools analyze content and return a score for how likely it is to be human- or AI-generated. Both are also customizable, allowing the user to pick a threshold for "rejecting" AI-generated content based on their tolerance for inaccurate classification.
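As an illustration, here is the kind of simple script the integration involves. The endpoint and response field below are placeholders, not a real vendor API; substitute your detection vendor's actual interface and tune the threshold to your tolerance for misclassification:

    import os
    import requests

    # Hypothetical detection API; swap in your vendor's real endpoint and schema.
    DETECTOR_URL = "https://api.example-detector.com/v1/classify"
    REJECT_THRESHOLD = 0.9  # tune to your tolerance for false positives

    def looks_ai_generated(text: str) -> bool:
        """Return True if the detector scores the text above the rejection threshold."""
        resp = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {os.environ['DETECTOR_API_KEY']}"},
            json={"input": text},
            timeout=30,
        )
        resp.raise_for_status()
        score = resp.json()["ai_probability"]  # hypothetical response field
        return score >= REJECT_THRESHOLD

    if looks_ai_generated("Some submitted article text..."):
        print("Flag for human review before accepting.")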

Upfront time commitment: ~1 hour to write a simple script that integrates a detection API into your content flow.
Ongoing time commitment: minimal, with periodic reviews of suitability/thresholds.

Side note: For Application developers and Foundation model developers, these tools should be used to programmatically filter out AI-generated content from training data used for future models. 

  3. Content Moderation [U, D, GS]

GenAIOps tools can be used for content moderation to evaluate user-generated content and analyze its features, such as toxicity, agreeableness, conciseness, and opinion sharing, ensuring that the content presented to the audience is suitable and relevant. 

Here are some examples of content moderation tools:

1. OpenAI: This tool can filter out hate, threats, self-harm, sexually explicit content, and violence, and the user of the API can choose to filter all or a subset of these categories. It is free and English-focused (integration sketched after this list).
2. Hive: This tool can filter out sexual content, hate, child exploitation, violence, spam, child safety, cyberbullying, drugs, privacy violations, gibberish, and profanity. It supports 17 languages and is a paid service.
3. Perspective API: This tool can filter out toxicity, severe toxicity, identity attacks, threats, profanity, and sexually explicit content. It supports 18 languages and is a free service with rate limits.
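To make the integration concrete, here is a minimal sketch that calls OpenAI's moderation endpoint over plain HTTPS. The endpoint and response shape match OpenAI's public API documentation at the time of writing, but check the current docs before relying on them:

    import os
    import requests

    def moderate(text: str) -> dict:
        """Send text to OpenAI's free moderation endpoint; return the first result."""
        resp = requests.post(
            "https://api.openai.com/v1/moderations",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"input": text},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["results"][0]

    result = moderate("some user-generated text")
    if result["flagged"]:
        # List which policy categories (hate, violence, etc.) were triggered.
        hits = [cat for cat, hit in result["categories"].items() if hit]
        print(f"Blocked for: {hits}")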

Upfront time commitment: a few hours for integrating a detection API into a company’s data flow.
Ongoing time commitment: minimal for periodic reviews of suitability and thresholds.

B. User-in-Loop genAIOps Tools I:

  1. Behavior modification [U, D, F]

Behavior modification using genAI involves instructing a chatbot to exhibit specific behaviors, such as being concise or demonstrating expertise in a particular subject. This can be achieved through prompt pre-pending, which can be a baseline set by the developer or a per-use tool. 

The main challenge with prompt pre-pending is coming up with a prompt that achieves your desired behavior. While trial-and-error usually works, there are many resources where specific prompts and tips for prompt engineering are shared. Automated tools like the one found in this LinkedIn post can further streamline the process.

Examples of ChatBots:

1. OpenAI’s ChatGPT.
2. Anthropic’s Claude.

While this op remains more “art than science,” there is compelling evidence that behavior modification via pre-prompting can reduce the risk of undesirable outputs. For instance, simply instructing a chatbot with: “Please ensure your answer is unbiased and does not rely on stereotypes,” can reduce the bias of a chatbot by more than 80%. For users of chatbot UIs, this pre-prompting can take place at the start of a dialogue. With the release of GPT-4, OpenAI now treats behavior modification via pre-prompting as a first-class feature through the “system message” component of its API.
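A minimal sketch of what this looks like against OpenAI's chat completions API follows; the model name and system prompt are illustrative, and each request re-sends the same system message so the behavior persists across turns:

    import os
    import requests

    SYSTEM_PROMPT = (
        "You are a concise assistant. Please ensure your answers are unbiased "
        "and do not rely on stereotypes."
    )

    def ask(user_message: str) -> str:
        """Send one user turn with the behavior-modifying system message pre-pended."""
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-4",  # illustrative; any chat-capable model works
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": user_message},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]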

Upfront time commitment: ~10 hours of research on suitable pre-prompts to adopt this approach. For less sophisticated users with more targeted needs, the upfront commitment is around 1-2 hours to identify a suitable set of pre-prompts. This time can be reduced further by using a pre-pender tool like the one found in the LinkedIn link.

Ongoing time commitment: minimal.

  2. Controlling Downstream Use [D, F]

As a staunch supporter of, and participant in, the open-source community, Credo AI understands the desire of genAI developers to publish their model weights, code, or data. However, given the rapidly expanding capabilities of these models, it is crucial to establish responsible guardrails that prevent misuse by downstream developers and users.

Here are a few tools that can help control downstream use: 

1. Responsible AI License (RAIL): a legal agreement that outlines the acceptable use guidelines for generative AI models.
2. HuggingFace Gated Models: a platform that provides access control for models and datasets through gated repositories.

Attaching a RAIL or gating your model with HuggingFace’s Gated Models offering can help to control the use and development of generative AI models and prevent their misuse. 

By limiting access to the model and controlling the types of data it can be used on, creators can reduce the risk of the model being used for harmful purposes. Additionally, by placing restrictions on its use, such as limiting it to research purposes only, the creators can ensure that the model is being used in an ethical and responsible way.
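From the downstream side, the effect of a gate is easy to see. The sketch below assumes a recent version of the huggingface_hub client library; the repository id is hypothetical, and downloading fails until the user has accepted the gate's terms and authenticated with a token:

    from huggingface_hub import hf_hub_download
    from huggingface_hub.utils import HfHubHTTPError

    try:
        # Hypothetical gated repository: with no token and no accepted terms,
        # the Hub refuses the download.
        path = hf_hub_download(
            repo_id="your-org/your-gated-model",
            filename="pytorch_model.bin",
            token=False,  # explicitly anonymous
        )
    except HfHubHTTPError as err:
        print(f"Access denied until the gate's terms are accepted: {err}")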

Upfront time commitment: ~2 hours to outline acceptable use guidelines.

Ongoing time commitment: minimal for RAIL (less effective at controlling downstream use); and variable for HuggingFace, depending on the number of users applying for access (more effective).

C. User-in-Loop genAIOps Tools II:

Augmenting the above list of user-in-loop approaches to de-risking, we also highlight content review as an effective but less scalable op. This approach can be automated to a degree, but the value of doing so depends on your organization’s specific needs.

  1. Content review [U]

Content review is an essential step in the development process. It involves reviewing each piece of content before it is published or pushed to production to ensure its accuracy and relevance. This step is already integral to a mature software development workstream and should also be adopted for text and image creation.

Examples of tools:

1. HUMANS! Even if technology makes up most of your creative process, a final review by a human is essential to ensure that the content aligns with the organization's values and ethics and to catch nuanced issues that AI may miss, such as cultural sensitivity, emotional tone, or, in the case of code, security vulnerabilities and subtle bugs.
2. Chatbots: with the right prompt, they can be great allies in speeding up a human review!

While content review is time-consuming, AI can help make parts of the process more efficient and mitigate the operational risk of publishing low-quality content. One can explicitly ask highly-capable AIs, like ChatGPT, to critique and revise content, making it more suitable for the intended audience and better matched to the tone of the author's organization.

Other risks cannot trivially be mitigated with AI-assisted content review. For instance, AI is known to “hallucinate” or make up facts. This behavior is very hard for an AI to detect, so it’s critical to have a human in the loop for, at a minimum, the final sign-off before publication or use.
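One way to wire in the AI critic described above, while keeping the human as the final gate, is sketched below; the policy wording and model choice are illustrative:

    import os
    import requests

    CRITIC_PROMPT = (
        "You are a careful editor. Critique the following draft against our "
        "content policy: flag unsupported factual claims, off-brand tone, and "
        "potentially insensitive phrasing. List issues; do not rewrite."
    )

    def critique(draft: str) -> str:
        """Return an AI critique of a draft for a human reviewer to act on."""
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-4",
                "messages": [
                    {"role": "system", "content": CRITIC_PROMPT},
                    {"role": "user", "content": draft},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        # A human still gives the final sign-off before publication.
        return resp.json()["choices"][0]["message"]["content"]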

Upfront time commitment: 2 hours for aligning content policies.

Ongoing time commitment: varies depending on the length and quantity of published content, but using ChatGPT or another highly-capable AI as a critic can lead to significant time savings of up to 50%.

Conclusion:

GenAI is rapidly transforming the business landscape, and as more organizations adopt this technology, it is essential to address the potential risks associated with its development and use. Fortunately, various low-barrier genAI Ops tools can help mitigate those risks and improve the ROI of your business. By categorizing stakeholders and understanding the different genAI Ops tool categories, organizations can make informed decisions about the tools they need to ensure responsible genAI development and use. With these tools, businesses can unlock the full potential of genAI responsibly, and Credo AI is committed to highlighting current low-barrier tools to help organizations take action today. 

If you need to operationalize and streamline responsible genAI, join the genAI Trust Toolkit Early Access program or request a demo at demo@credo.ai!

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.