AI Governance

Designing Truly Human-Centered AI

Kyle Ledbetter
VP of Design
December 8, 2022

Guidelines and best practices to ensure that AI systems are designed with human values and experiences at their core.

As we enter the era where AI has the potential to impact almost every aspect of our lives, there is a growing need to ensure that AI systems are designed with human values and experiences at their core. This is a high-level introduction to Human-Centered AI (HCAI), a Responsible AI methodology for anyone building or using AI systems (such as researchers, data scientists, developers, and product managers).

An Evolution of Human-Centered Design

As Head of Design at Credo AI, my first step in designing the Credo AI Responsible AI Platform was to ensure that we address and consider real people’s needs and concerns at every stage of our software design and development lifecycle. Humans are never an afterthought, and the value and real-world outcomes of our product cannot be unintended or accidental. These needs also change over time and must be continually re-evaluated. We believe these same approaches, methodologies, and principles apply to the development and use of Artificial Intelligence.

Context is Key

For truly Human-Centered Design, societal, cultural, and other contextual information is crucial to every design decision. The same applies to Responsible AI. When designing, developing, and assessing an AI Use Case, the unique blend of industry, model type, training datasets, impacted downstream populations, and involved stakeholders must be considered at every stage. Applying Human-Centered Design to AI is a logical evolution: “Human-Centered AI.”

Human-Centered AI

  1. Humans must consider humans when developing AI
  2. Humans must be represented in AI training and validation data
  3. Humans must be involved in the governance and monitoring of AI
  4. Human stakeholders must be able to understand how the AI works
  5. Humans must be valued over performance and profitability of AI

Humans must consider humans when developing AI. 

As our Head of Product, Susannah Shattuck, says, "if you’re not validating your system for alignment with your ethics and values until it’s already been built, right before deployment, then you’re setting yourself up for failure."

Evaluating and managing the human impact of an AI system needs to happen throughout the design, development, and deployment lifecycle. You must start with the use case, the context, and how humans will be impacted. When we develop new technologies, the human aspect must be considered from the very beginning, not just evaluated right before deployment.

Hypergiant offers a useful framework for evaluating a use case for ethical decision making, called “TOME” (Top Of Mind Ethics):

  1. Establishing Goodwill
    Does this choice reflect positive intent and alignment with our company values?
  2. Categorical Imperative
    View all of society as our stakeholders. If every company in our industry, and every industry in the world used AI in the way proposed, what would the world look like? Is that a world we want to live in?
  3. Law of Humanity
    What impact will we have on society if we deploy AI in this way? Are we using people simply as a means to an end?

At Credo AI, we onboard every new customer with an AI Use Case alignment session, bringing diverse stakeholders together to align on the intent and impact of their AI Use Case. In the Credo AI application, a Use Case prerequisite is to provide contextual information, which drives recommendations for regulation and compliance, as well as technical assessment guidance.

(All) Humans must be represented in AI training and validation data. 

A large share of bias is introduced via the training data during model development, partly because of the challenges of collecting and accessing representative data for training and validation.

By appropriately representing the people who will be impacted in AI training and validation data, you gain assurance that AI systems will reflect human values and will not perpetuate bias or discrimination.

A dataset fairness assessment is a valuable method of evaluating the objectivity, completeness, and robustness of the data in a given dataset. These assessments can be used to determine whether a dataset is biased or whether it is representative of all groups within the population.
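As an illustration, here is a minimal sketch of one slice of such an assessment: checking how each group is represented relative to an equal split. This is not Credo AI’s assessment tooling; the `group` column and the equal-split baseline are illustrative assumptions, and in practice the baseline should reflect the population the system will actually serve.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare each group's share of the data against an equal split."""
    counts = df[group_col].value_counts()
    share = counts / counts.sum()
    expected = 1.0 / counts.size  # naive baseline: every group equally sized
    return pd.DataFrame({
        "count": counts,
        "share": share.round(3),
        "gap_vs_equal_split": (share - expected).round(3),
    })

# Toy data: group C is badly underrepresented.
df = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})
print(representation_report(df, "group"))
```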

Humans must be involved in the governance and monitoring of AI. 

Not only must humans be involved; those humans must also be diverse in background and profession. This is why Credo AI was designed as a multi-stakeholder product from the start.

Humans must remain in the loop for the governance and monitoring of AI. Although AI has proven generally reliable, it can still produce errors with serious consequences.

To avoid these possible issues, it’s important that organizations establish a safety net that includes human checks and balances for when something does not work as expected.
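What such a safety net can look like in code: a minimal, hypothetical sketch in which predictions below a confidence threshold are routed to a human review queue rather than auto-applied. The threshold value and queue structure are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds low-confidence decisions awaiting a human reviewer."""
    pending: list = field(default_factory=list)

    def submit(self, item_id: str, prediction: str, confidence: float) -> None:
        self.pending.append((item_id, prediction, confidence))

def decide(item_id: str, prediction: str, confidence: float,
           queue: ReviewQueue, threshold: float = 0.9) -> str:
    """Auto-apply confident predictions; escalate the rest to a human."""
    if confidence >= threshold:
        return prediction
    queue.submit(item_id, prediction, confidence)
    return "escalated_to_human"

queue = ReviewQueue()
print(decide("case-001", "approve", 0.97, queue))  # approve
print(decide("case-002", "deny", 0.62, queue))     # escalated_to_human
```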

Human stakeholders must be able to understand how the AI works. 

A core tenet of Responsible AI is transparency, which includes explainability. Setting context for an AI Use Case is step 0 for transparency, and it is the starting point of the journey when working with our team.

Humans simply cannot govern AI if they can't understand how it functions and what it's supposed to do. That sounds simple, but it's extremely hard to do. The downstream humans impacted by AI must also be made aware that AI is being used and how it affects them.
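One common, concrete way to give stakeholders a view into how a model works is a global feature-importance summary. The sketch below uses scikit-learn's permutation importance on a toy model; it is one illustrative technique among many, not the whole of explainability.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data and model standing in for a real AI Use Case.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt the score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={importance:.3f}")
```

In a real deployment, summaries like this would be paired with plain-language documentation so that non-technical stakeholders, including the people the system affects, can understand them.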

This transparency isn't just necessary for use cases like home loan approval and credit review; it's also critical for retaining the trust of employees and earning the trust of candidates in the hiring process. Perhaps the most crucial need for transparency arises in the use of AI by governments, police, and the military.

Humans must be valued over performance and profitability of AI. 

The most common dilemma in adopting Responsible AI (RAI) is the risk of valuing profits and performance over people. As an AI development community, we must be aware of our systems' limitations and be ready to trade off performance in favor of fairness.

Once adopted, RAI governance and best practices make this decision easier. If you are aware of and considering these factors early in the ML development cycle, it's far easier to acknowledge them and adjust accordingly before the business is sold on particular performance metrics.
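To make the trade-off concrete, here is a toy sketch comparing two hypothetical candidate models on both accuracy and a simple fairness proxy (the gap in positive-prediction rates across groups). The simulated data and predictions are assumptions for illustration only; real assessments use richer metrics and real outcomes.

```python
import numpy as np

def selection_rate_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)

# Hypothetical predictions: model_1 always approves group A (unfair),
# model_2 predicts independently of group membership.
candidates = {
    "model_1": np.where(groups == "A", 1, rng.integers(0, 2, size=1000)),
    "model_2": rng.integers(0, 2, size=1000),
}

for name, y_pred in candidates.items():
    accuracy = (y_pred == y_true).mean()
    gap = selection_rate_gap(y_pred, groups)
    print(f"{name}: accuracy={accuracy:.2f}, selection_rate_gap={gap:.2f}")
```

Seeing both numbers side by side is what lets a team consciously choose the fairer model, even when it costs a few points of raw performance.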

It is easy to put the bottom line first, but we must ensure that the human beings impacted by AI are our primary concern.

In closing, which scenario sounds better?

A: AI enhancing and augmenting humans to reach new heights (hopefully the obvious choice)

or

B: AI operating in the shadows and impacting our lives in unimaginable ways (yikes)

In the end, is the goal for humans to become data points and resources for a system run by AI? Or for humans to become augmented by AI like enhanced superheroes?

And if it’s a bit too science fiction to imagine being a superhero team, perhaps a super team of scientists, developers, designers, and product visionaries is more accurate…

Did you spot the use of AI throughout this publication?

All of these images were generated with DALL·E 2, and sections of text were edited with Copy.ai. These are the basic and humble beginnings of how AI can be used to enhance a task, yet they're already raising incredibly thought-provoking questions of ethics, licensing, rights, and responsibility.
