Human-on-the-loop

Human-on-the-loop (HOTL) is an extension of human-in-the-loop (HITL) in which humans provide feedback to an AI system to improve its performance over time. HOTL is typically used when the AI system has reached a certain level of performance but still requires human feedback and intervention to keep improving. In HOTL, humans act as trainers or teachers for the AI, supplying labeled data, correcting mistakes, and guiding the system toward better outcomes.

HOTL is common in applications such as autonomous vehicles, fraud detection, and medical diagnosis.
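The feedback cycle described above can be sketched in code. This is a minimal, hypothetical illustration (all names such as `SimpleModel` and `hotl_cycle` are invented for this example, not a real library API): the model decides autonomously on each case, a human periodically audits the outcomes, and any corrections are fed back to improve the model.

```python
# Minimal sketch of a human-on-the-loop (HOTL) feedback cycle,
# using a toy fraud-detection threshold model. Illustrative only.

class SimpleModel:
    """Toy classifier: flags transaction amounts above a learned threshold."""

    def __init__(self, threshold=100.0):
        self.threshold = threshold

    def predict(self, amount):
        return "fraud" if amount > self.threshold else "ok"

    def retrain(self, corrections):
        # Naive update: nudge the threshold so corrected examples
        # would now be classified as the human labeled them.
        for amount, true_label in corrections:
            if true_label == "fraud" and amount <= self.threshold:
                self.threshold = amount - 1.0  # lower threshold to catch it
            elif true_label == "ok" and amount > self.threshold:
                self.threshold = amount + 1.0  # raise threshold to allow it


def hotl_cycle(model, stream, human_labels):
    """Model acts autonomously; a human audits outcomes afterward and
    supplies corrections, which are used to retrain the model."""
    corrections = []
    for amount in stream:
        decision = model.predict(amount)       # autonomous decision
        true_label = human_labels.get(amount)  # periodic human audit
        if true_label is not None and true_label != decision:
            corrections.append((amount, true_label))  # labeled correction
    model.retrain(corrections)  # human feedback improves the model
    return corrections
```

For example, if a human auditor labels an $80 transaction as fraud that the model had passed, the correction lowers the model's threshold so similar cases are caught in the future. The key HOTL property is that the human does not gate each decision (as in HITL); the system runs autonomously and the human steers it over time.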
