AI Governance

Designing Truly Human-Centered AI


December 8, 2022
Author(s)
Kyle Ledbetter

Guidelines and best practices to ensure that AI systems are designed with human values and experiences at their core.

As we enter the era where AI has the potential to impact almost every aspect of our lives, there is a growing need to ensure that AI systems are designed with human values and experiences at their core. This is a high-level introduction to Human-Centered AI (HCAI), a Responsible AI methodology for anyone building or using AI systems (such as researchers, data scientists, developers, and product managers).

An Evolution of Human-Centered Design

As Head of Design at Credo AI, my first step in designing the Credo AI Responsible AI Platform was to ensure that we address and consider real people’s needs and concerns at every stage of our software design and development lifecycle. Humans are never an afterthought, and the value and real-world outcomes of our product cannot be unintended or accidental. These needs also change over time and must be continually evaluated and revisited. We believe the same approaches, methodologies, and principles can apply to the development and use of Artificial Intelligence.

Context is Key

For truly Human-Centered Design, societal, cultural, and other contextual information is crucial to every design decision. The same applies to Responsible AI: when designing, developing, and assessing an AI Use Case, the unique blend of industry, model type, training datasets, downstream populations impacted, and stakeholders involved must all be considered at every stage. Applying Human-Centered Design to AI is a logical evolution: “Human-Centered AI.”

Human-Centered AI

  1. Humans must consider humans when developing AI
  2. Humans must be represented in AI training and validation data
  3. Humans must be involved in the governance and monitoring of AI
  4. Human stakeholders must be able to understand how the AI works
  5. Humans must be valued over performance and profitability of AI

Humans must consider humans when developing AI. 

As our Head of Product Susannah Shattuck says, “If you’re not validating your system for alignment with your ethics and values until it’s already been built, right before deployment, then you’re setting yourself up for failure.”

Evaluating and managing the human impact of an AI system needs to happen throughout the design, development, and deployment lifecycle, at every step along the way. Start with the use case, the context, and how humans will be impacted. When we develop new technologies, the human aspect must be considered from the very beginning, not just evaluated right before deployment.

Hypergiant offers a useful framework for evaluating a use case for ethical decision making, called “TOME” (Top Of Mind Ethics):

  1. Establishing Goodwill
    Does this choice reflect positive intent, aligned with our company values?
  2. Categorical Imperative
    View all of society as our stakeholders. If every company in our industry, and every industry in the world, used AI in the way proposed, what would the world look like? Is that a world we want to live in?
  3. Law of Humanity
    What impact will we have on society if we deploy AI in this way? Are we using people simply as a means to an end?

At Credo AI, we onboard every new customer with an AI Use Case alignment session, bringing together diverse stakeholders to align on the intent and impact of their AI Use Case. In the Credo AI application, a Use Case prerequisite is to provide contextual information, which drives recommendations for regulation and compliance as well as technical assessment guidance.
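To make that concrete, here is a hypothetical sketch, in Python, of the kind of contextual record a Use Case might capture and how it could drive assessment guidance. The field names and mapping logic are illustrative assumptions for this post, not the Credo AI application’s actual schema.

```python
# Hypothetical Use Case context record. Field names are illustrative
# assumptions, not the Credo AI application's schema.
use_case_context = {
    "name": "resume screening assistant",
    "industry": "human resources",
    "model_type": "gradient-boosted classifier",
    "training_data": ["historical_applications_2018_2022"],
    "impacted_populations": ["job applicants"],
    "stakeholders": ["HR", "legal", "data science", "design"],
    "decision_stakes": "high",  # the outcome affects access to employment
}

def recommended_checks(context: dict) -> list:
    """Toy mapping from context to governance and assessment guidance."""
    checks = ["transparency disclosure to impacted people"]
    if context["decision_stakes"] == "high":
        checks += ["dataset fairness assessment",
                   "human review of adverse decisions"]
    if context["industry"] == "human resources":
        checks.append("review against applicable hiring regulations")
    return checks

print(recommended_checks(use_case_context))
```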

(All) Humans must be represented in AI training and validation data. 

A large share of bias is introduced through the training data during model development, partly because of the challenges in collecting and accessing representative data for training and validation.

By appropriately representing the people who will be impacted in AI training and validation data, you gain assurance that AI systems reflect human values and do not perpetuate bias or discrimination.

A dataset fairness assessment is a valuable method of evaluating the objectivity, completeness, and robustness of the data in a given dataset. These assessments can be used to determine whether a dataset is biased or whether it is representative of all groups within the population.
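As a minimal sketch of one slice of such an assessment, the check below compares each group’s share of a training set against a reference population and flags underrepresentation. The group labels, reference shares, and tolerance are illustrative assumptions, not a standard.

```python
import pandas as pd

def underrepresented_groups(df: pd.DataFrame, group_col: str,
                            reference: dict, tolerance: float = 0.05) -> dict:
    """Flag groups whose share of the data falls short of their share
    of the reference population by more than `tolerance`."""
    observed = df[group_col].value_counts(normalize=True)
    flagged = {}
    for group, expected in reference.items():
        actual = float(observed.get(group, 0.0))
        if expected - actual > tolerance:
            flagged[group] = {"expected": expected, "actual": round(actual, 3)}
    return flagged

# Toy data: group B makes up 10% of the training set but 30% of the
# reference population, so it is flagged as underrepresented.
train = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 10})
print(underrepresented_groups(train, "group", reference={"A": 0.7, "B": 0.3}))
# -> {'B': {'expected': 0.3, 'actual': 0.1}}
```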

Humans must be involved in the governance and monitoring of AI. 

Not only must humans be involved; those humans must also be diverse in background and profession. This is why Credo AI was designed as a multi-stakeholder product from the start.

Humans must remain in the loop of AI governance and monitoring. Although AI has proven to be generally reliable, it can still produce errors with serious consequences.

To avoid these issues, organizations should establish a safety net of human checks and balances for when something does not work as expected.
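A minimal sketch of such a safety net, assuming a model that exposes a confidence score: predictions below an agreed threshold are routed to a human review queue instead of being applied automatically. The threshold and queue structure are illustrative; real systems would also escalate based on the stakes of the decision, not just confidence.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds cases awaiting a human decision."""
    pending: list = field(default_factory=list)

    def escalate(self, case_id: str, prediction: str, confidence: float):
        self.pending.append((case_id, prediction, confidence))

def decide(case_id: str, prediction: str, confidence: float,
           queue: ReviewQueue, threshold: float = 0.9) -> str:
    """Auto-apply confident predictions; route the rest to a human."""
    if confidence < threshold:
        queue.escalate(case_id, prediction, confidence)
        return "needs_human_review"
    return prediction

queue = ReviewQueue()
print(decide("loan-001", "approve", 0.97, queue))  # -> approve
print(decide("loan-002", "deny", 0.62, queue))     # -> needs_human_review
```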

Human stakeholders must be able to understand how the AI works. 

A core tenet of Responsible AI is transparency, which includes explainability. Setting context for an AI Use Case is step zero for transparency, and it is the starting point of the journey when working with our team.

Humans simply cannot govern AI if they can’t understand how it functions and what it’s supposed to do. It sounds simple, but it’s extremely hard to do. The downstream humans impacted by AI must also be made aware that AI is being used and how it’s affecting them.

This transparency isn't just necessary for use cases like home loan approval and credit review, it’s also critical for retaining trust with employees and earning trust with potential candidates in the hiring process. Perhaps the most crucial need for transparency arises in the use of AI in the government, police, and military. 
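There are many ways to make a model’s behavior legible to stakeholders; one common, model-agnostic starting point is permutation feature importance, sketched below with scikit-learn on synthetic data. This is a generic illustration of the technique, not a specific Credo AI assessment.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```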

Humans must be valued over performance and profitability of AI. 

The most common dilemma in adopting Responsible AI (RAI) is the risk of valuing profits and performance over people. As an AI development community, we must be aware of limitations and be ready to trade off performance in favor of fairness.

Once adopted, RAI governance and best practices make this decision easier. If you are aware of and considering these factors earlier in the ML development cycle, it’s far easier to acknowledge them and adjust accordingly before the business is sold on particular performance metrics.
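As a toy illustration of encoding that trade-off as an explicit rule: among candidate models, select the most accurate one whose fairness gap stays within an agreed threshold. The metric values and the 0.10 threshold here are made up for illustration; in practice they would be agreed with stakeholders up front.

```python
candidates = [
    # (name, accuracy, demographic parity difference between groups)
    ("model_a", 0.92, 0.18),
    ("model_b", 0.89, 0.07),
    ("model_c", 0.85, 0.03),
]

FAIRNESS_THRESHOLD = 0.10  # maximum acceptable gap, agreed in advance

# Keep only models within the fairness threshold, then pick the most accurate.
acceptable = [m for m in candidates if m[2] <= FAIRNESS_THRESHOLD]
best = max(acceptable, key=lambda m: m[1]) if acceptable else None
print(best)  # -> ('model_b', 0.89, 0.07): slightly less accurate, far fairer
```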

It is easy to put the bottom line first, but we must ensure that the human beings impacted by AI are our primary concern.

In closing, which scenario sounds better?

A: AI enhancing and augmenting humans to reach new heights (hopefully the obvious choice)

or

B: AI operating in the shadows and impacting our lives in unimaginable ways (yikes)

In the end, is the goal for humans to become data points and resources for a system run by AI? Or for humans to become augmented by AI like enhanced superheroes?

And if it’s a bit too science fiction to imagine being a superhero team, perhaps a super team of scientists, developers, designers, and product visionaries is more accurate…

Did you spot the use of AI throughout this publication?

All of these images were generated with DALL·E 2, and sections of text were edited with Copy.ai. These are humble beginnings of how AI can be used to enhance a task, yet they’re already raising thought-provoking questions of ethics, licensing, rights, and responsibility.

DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.