General-purpose AI (GPAI)

General-purpose AI (GPAI) refers to AI models trained on broad datasets that can perform a wide range of tasks like writing, coding, summarizing, reasoning, and generating images, without being purpose-built for any single one. 

Unlike narrow AI systems designed for a specific job, GPAI models serve as flexible foundations on which many different applications can be built. Large language models like GPT-4 and Claude are prominent examples.

How GPAI Models Work

A GPAI model is trained on massive volumes of text, code, images, or other data, often drawn from the public internet, books, and licensed datasets. Through this training process, the model develops broad capabilities: it can answer questions, translate languages, write code, analyze documents, and more, often without any additional task-specific training.

This breadth is what defines GPAI. The same underlying model that drafts a marketing email can also summarize a legal contract or explain a scientific concept. Organizations can then build on top of these models in two main ways:

  • Fine-tuning: Adjusting the model's weights on a smaller, domain-specific dataset to specialize its behavior. For example, training a general language model to focus on medical documentation.
  • Prompting: Directing the model's output through carefully crafted instructions, without modifying the model itself. This is how most enterprise applications work today.

The result is a layered ecosystem: a GPAI model at the center, with dozens or hundreds of downstream applications built around it, each inheriting both the capabilities and the risks of the underlying model.
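
The prompting approach described above can be illustrated with a minimal sketch. The template format and `build_prompt` helper below are hypothetical, not any particular vendor's API, but the pattern is the same: the model's weights never change, only the text it receives.

```python
# Minimal sketch of "prompting": steering a fixed model purely through its
# input text, with no change to the model's weights. The template is
# hypothetical; real applications pass a similar string to a model API.

def build_prompt(system_instructions: str, user_input: str) -> str:
    """Combine fixed instructions with a user request into one prompt."""
    return (
        f"Instructions: {system_instructions}\n\n"
        f"User request: {user_input}\n\n"
        "Response:"
    )

# The same underlying model can be specialized for different tasks
# just by swapping the instructions, with no fine-tuning involved:
legal_prompt = build_prompt(
    "You are a contract-review assistant. Flag unusual indemnity clauses.",
    "Review the attached NDA.",
)
support_prompt = build_prompt(
    "You are a polite customer-support agent. Keep answers under 100 words.",
    "My order hasn't arrived.",
)
```

Fine-tuning, by contrast, would bake the specialization into the model itself by updating its weights on domain-specific data, which is why it requires far more data, compute, and governance scrutiny than swapping an instruction string.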

GPAI vs. Narrow AI: What's the Difference?

Most AI systems deployed before the generative AI era were narrow: a fraud detection model that does only fraud detection, a recommendation engine that does only recommendations. These systems are easier to evaluate and govern because their scope is fixed.

GPAI models are different in kind, not just scale. Their outputs are open-ended and hard to predict. The same model used to generate helpful customer support responses can also be prompted, intentionally or not, to produce harmful, biased, or misleading content. This unpredictability is one of the central challenges that makes governing GPAI models a distinct problem from governing traditional AI.

Why GPAI Raises Distinct Governance Challenges

Because GPAI models underpin so many applications, risks at the model level can propagate widely. A flaw in how a foundation model handles a particular language, demographic group, or topic doesn't just affect one product; it affects every product built on that model.

Several governance challenges are specific to GPAI:

Opacity of training data: GPAI models are trained on vast datasets that are often poorly documented. Biases embedded in training data can surface in unpredictable ways across downstream uses.

Diffuse accountability: When a GPAI model is used to build a product, responsibility for harms can be unclear: is it the model provider, the application developer, or the deploying organization? This diffusion makes assigning and enforcing accountability genuinely difficult.

Systemic risk: Under the EU AI Act, the most capable GPAI models, those trained using more than 10²⁵ FLOPs of compute, are presumed to pose systemic risk. Their widespread deployment means that failures or misuse could have outsized societal consequences.

Emergent capabilities: GPAI models sometimes develop capabilities that weren't explicitly trained for and weren't anticipated by developers. This makes pre-deployment risk assessment harder than with narrow systems.
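
The systemic-risk threshold above can be sketched as a back-of-envelope calculation. The sketch below uses the common approximation that training compute for a dense transformer is roughly 6 × parameters × training tokens; the model sizes are hypothetical illustrations, not disclosed figures for any real model.

```python
# Rough check against the EU AI Act's 10^25-FLOP systemic-risk threshold.
# Uses the widely cited approximation: training compute ≈ 6 * N_params * N_tokens.
# Model sizes below are hypothetical illustrations, not real disclosures.

EU_AI_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-envelope training compute for a dense transformer."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate exceeds the EU AI Act's presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) > EU_AI_ACT_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
# ~6.3e24 FLOPs, below the threshold.
small = presumed_systemic_risk(70e9, 15e12)

# A hypothetical 400B-parameter model trained on 15T tokens:
# ~3.6e25 FLOPs, above the threshold.
large = presumed_systemic_risk(400e9, 15e12)
```

In practice the presumption can also be triggered or rebutted through other criteria, so this arithmetic is only a first-pass screen, not a compliance determination.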

Real-World Examples

Example 1: A Legal Tech Company Builds on a GPAI Model 

A legal technology startup builds a contract review tool using a general-purpose large language model as its base. The tool is not the GPAI model itself; it's an application layer around it. 

The startup is responsible for ensuring the tool performs accurately and safely in legal contexts, but the underlying model's behaviors (including any biases or knowledge gaps) flow through to the product. This is the governance challenge of the GPAI ecosystem in practice: downstream accountability depends on upstream transparency.

Example 2: An Enterprise Deploys a GPAI-Powered Chatbot Internally 

A large financial services firm deploys a GPAI-based assistant for internal use, helping employees draft communications, retrieve policy documents, and answer compliance questions. 

The firm didn't build the model, but it is responsible for how the model is used.

They need to monitor outputs, restrict certain use cases, and ensure the system doesn't expose sensitive data or generate non-compliant advice. This is why AI governance programs increasingly need to account for GPAI systems, not just custom-built models.
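
One deployer-side control mentioned above, screening model output before it reaches an employee, can be sketched in a few lines. Real guardrails use trained classifiers and policy engines rather than keyword matching; the deny-list and `screen_output` function here are purely illustrative.

```python
# Simplified sketch of a deployer-side guardrail: checking model output
# against a deny-list before returning it to the user. Production systems
# use classifiers and policy engines; this keyword check is illustrative.

BLOCKED_TOPICS = ("investment advice", "account number", "password")

def screen_output(model_output: str) -> tuple[bool, str]:
    """Return (allowed, text); flagged output is replaced with a refusal."""
    lowered = model_output.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, "This request requires review by the compliance team."
    return True, model_output

ok, text = screen_output("Our leave policy grants 20 vacation days.")
flagged, refusal = screen_output("Sure, here is personalized investment advice...")
```

A check like this sits entirely in the deployer's application layer, which is exactly why such controls remain the deployer's responsibility even when the underlying model comes from a third-party provider.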

GPAI in the Context of AI Regulation

GPAI models now have their own regulatory category. Under the EU AI Act, GPAI models are subject to distinct obligations that apply to providers, the organizations that train and release these models. These include transparency requirements, technical documentation, copyright compliance policies, and summaries of training data.

For the most powerful GPAI models, those that may pose systemic risk, additional requirements apply: adversarial testing, incident reporting to the EU AI Office, and cybersecurity protections. These obligations took effect in August 2025.

This regulatory treatment reflects a broader recognition: that governing AI application by application isn't enough when a single model underlies thousands of applications. Governance needs to reach the model layer itself.

For enterprises that deploy GPAI-based tools, the regulatory obligations primarily fall on the model provider, but deployers are not off the hook. Appropriate use, human oversight, and ensuring the system is used within its intended scope remain the deployer's responsibilities. Understanding where provider obligations end and deployer obligations begin is an increasingly important part of AI risk management.

For a broader look at how GPAI fits into the current regulatory environment, Credo AI's blog on key AI regulations in 2025 covers the full compliance landscape.

Summary

General-purpose AI (GPAI) refers to AI models capable of performing a wide range of tasks, serving as a foundation for countless downstream applications. Unlike narrow AI, GPAI models are open-ended, difficult to fully predict, and carry risks that propagate across every application built on them. 

This creates governance challenges around training data transparency, accountability, and systemic risk that traditional AI governance approaches weren't designed to address. Regulators have taken notice: the EU AI Act now includes a dedicated GPAI framework, with obligations scaling based on capability and risk. 

For organizations using GPAI-powered tools, understanding the boundaries between provider and deployer responsibility is no longer optional; it's a core part of operating AI responsibly.

Frequently Asked Questions

Here you can find the most common questions.

Is a large language model like GPT-4 or Claude considered GPAI?

Yes. Any AI model trained on broad data to perform multiple tasks across domains, rather than one specific function, qualifies as GPAI under the EU AI Act's definition.

If my company uses a GPAI-based tool but didn't build the model, do we have compliance obligations?

Yes. The model provider handles obligations around documentation and transparency, but your organization is responsible for appropriate use, human oversight, and ensuring the system stays within its intended scope.

What makes a GPAI model "systemically risky" under the EU AI Act?

The Act uses training compute as a proxy: models trained using more than 10²⁵ FLOPs are presumed to pose systemic risk. These models face additional requirements including adversarial testing and incident reporting to the EU AI Office.
