On November 5, 2025, the European AI Office convened the first plenary meeting to launch the Code of Practice on Transparency for AI-Generated Content, a cornerstone initiative under Article 50 of the EU AI Act. For the next seven months, this multi-stakeholder process will bring together providers, deployers, standards bodies, researchers, civil society, and solution vendors to co-design the practical backbone of transparency in the age of generative AI.
Credo AI is proud to join as an active member of the working groups, contributing our expertise in AI governance, assurance, and standards to ensure that transparency under Article 50 becomes more than a principle; it becomes operational, testable, and trustworthy by design.
Why does this Code matter, and why now?
The transparency obligations under Article 50 exist for a simple but powerful reason: people deserve to know when they’re interacting with AI. That includes when content is AI-generated or manipulated (such as deepfakes), when AI informs the public on matters of importance, and when emotion recognition or biometric categorization is used. These duties aim to protect individuals from deception, fraud, impersonation, and manipulation, preserving the integrity of the digital information ecosystem. They complement the broader framework of the AI Act, which governs high-risk AI systems and general-purpose AI (GPAI) models.
But regulation alone isn’t enough. To make these rules workable across diverse formats and media (text, image, video, and audio), the ecosystem needs shared technical methods and interoperable standards. That’s where this Code of Practice (CoP) comes in: a voluntary, multi-stakeholder instrument that, if endorsed by the Commission, will become a recognized route to demonstrate compliance with Article 50(2) and 50(4).
It’s the bridge between policy aspiration and technical implementation, and a critical signal to citizens that AI-generated content can be transparent, accountable, and verifiable.
What exactly did the Plenary set in motion?
The kick-off plenary launched a seven-month drafting process (November 2025–June 2026), structured around two dedicated working groups aligned with Article 50’s two pillars of responsibility:
- Working Group 1 – providers: tasked with defining marking and detectability obligations. Providers must ensure that AI outputs (audio, image, video, or text) are machine-readable and detectable as artificially generated or manipulated. The Code will set out expectations for technical feasibility, interoperability, and robustness, accounting for different media types, cost considerations, and the state of the art.
- Working Group 2 – deployers: focused on disclosure obligations when content constitutes a deepfake, or when AI-generated or manipulated text informs the public on matters of public interest. Deployers must clearly disclose AI involvement unless the material is subject to human editorial responsibility.
Both working groups will also address cross-cutting issues under Article 50(5) (particularly, how natural persons are informed when interacting with AI) and explore horizontal collaboration across the AI value chain. Each group is led by independent chairs and vice-chairs, experts spanning AI security, digital forensics, law, and media integrity, ensuring a multidisciplinary balance between technological precision and societal context. In parallel, the European Commission will develop Article 50 Guidelines (expected in Q1 2026, as stipulated in Article 96(1)(d) of the AI Act) to clarify scope, terminology, and exceptions, ensuring consistency between the Code, the Act, and related EU frameworks such as the DSA, NIS2, and the Cyber Resilience Act.
Importantly, the transparency obligations themselves become applicable on August 2, 2026, leaving organizations only a few months after the Code’s expected completion to operationalize compliance.
From Concepts to Controls: What ‘Transparency by Design’ Means in Practice
The Code will not dictate a single method but rather define a portfolio of complementary techniques: a practical menu of ‘transparency primitives’ that providers and deployers can implement, alone or in combination, depending on their system architecture and risk profile. Among the tools under discussion (an illustrative sketch follows this list):
- Watermarks – Embedded, machine-detectable signals resilient to common transformations (resizing, compression, or re-encoding).
- Provenance metadata – Cryptographically signed manifests that travel with content, enabling downstream verification of its origin, generation, and modification history.
- Cryptographic signatures & attestations – Anchoring authenticity and enabling tamper-evident chains of custody across platforms.
- Logging & fingerprinting – Supporting traceability, auditing, and forensic detection in a privacy-preserving way.
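To make the provenance and signing primitives concrete, here is a minimal sketch of a signed provenance manifest. It is illustrative only, not a C2PA implementation: the manifest fields are hypothetical, and it assumes the Python `cryptography` package for Ed25519 signing.

```python
# Illustrative sketch only: a simplified provenance manifest signed with Ed25519.
# This is NOT a C2PA implementation; all field names are hypothetical.
# Assumes the 'cryptography' package (pip install cryptography).
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def build_manifest(content: bytes, generator: str) -> dict:
    """Describe a piece of content: a hash of its bytes plus origin details."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g., a model or system identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,  # machine-readable flag: artificially generated
    }


def canonical_bytes(manifest: dict) -> bytes:
    """Canonical (sorted-key) JSON encoding so verification is deterministic."""
    return json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()


# Provider side: generate content, then build and sign the manifest.
key = Ed25519PrivateKey.generate()  # in production, use a managed or HSM-held key
content = b"...generated image bytes..."
manifest = build_manifest(content, generator="example-image-model-v1")
signature = key.sign(canonical_bytes(manifest))

# Downstream verification: raises InvalidSignature if anything was altered.
key.public_key().verify(signature, canonical_bytes(manifest))
print("provenance manifest verified")
```

In a real pipeline, the manifest would follow an interoperable schema such as C2PA rather than an ad hoc JSON layout, so that platforms across the value chain can verify it without bilateral agreements.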
Beyond the technical layer, the Code will define effectiveness criteria (e.g., precision/recall for detectors, robustness to manipulation), interoperability requirements, and usability standards for disclosures, ensuring that transparency is comprehensible to humans, not just readable by machines.
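As a rough illustration of how such effectiveness criteria could be made measurable, the sketch below computes precision and recall for a hypothetical AI-content detector over a toy labeled set; the data and numbers are invented.

```python
# Illustrative only: precision/recall for a hypothetical AI-content detector.
# labels: True means the item really is AI-generated; preds: detector verdicts.
def precision_recall(labels: list[bool], preds: list[bool]) -> tuple[float, float]:
    tp = sum(l and p for l, p in zip(labels, preds))      # AI content, flagged
    fp = sum(p and not l for l, p in zip(labels, preds))  # human content, flagged
    fn = sum(l and not p for l, p in zip(labels, preds))  # AI content, missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Toy evaluation set: the detector misses one AI item and over-flags one human item.
labels = [True, True, True, False, False, False]
preds = [True, True, False, True, False, False]
p, r = precision_recall(labels, preds)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```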
Privacy and proportionality will remain core design tenets. The Code is expected to emphasize data minimization, protection of personal and proprietary information, and tiered access to logs or provenance chains.
Notably, the framework will aim for cost-effective, SME-accessible solutions, so that transparency compliance does not become a luxury only large providers can afford; that accessibility is also a key objective of the EU’s upcoming Digital Omnibus.
How This Code Complements the GPAI Code of Practice
This new Code builds on the foundation laid by the GPAI Code of Practice, which focuses upstream on model inputs, documentation, and safety governance (e.g., training-data summaries, model cards, red-teaming).
By contrast, the Transparency Code of Practice operates downstream on the outputs that reach people. It ensures that synthetic or manipulated content is detectable, labeled, and disclosed appropriately.
Together, these two Codes define the full lifecycle of trust:
- The GPAI Code ensures models are built responsibly.
- The Transparency Code ensures their outputs are recognizable and accountable.
Credo AI’s engagement in both initiatives reflects our commitment to making trust operational by linking governance controls from model development to end-user impact.
Credo AI’s Contribution: From Principle to Proof
As an EU AI Pact co-signatory and active contributor to the GPAI Code of Practice, Credo AI brings a governance lens grounded in measurable assurance. In the Transparency Code working groups, we are focused on four main contributions:
- Assurance Patterns, Not Aspirations: Translating transparency duties into testable control sets, evidence requirements, and metrics that regulators, auditors, and enterprises can actually use. Our objective is to help build reference conformance profiles and testing checklists that make compliance verifiable (see the hypothetical control entry after this list).
- Interoperability by Default: Advocating for signals and metadata formats that work across the value chain (from model providers to content platforms), aligned with emerging standards such as C2PA and ISO AI assurance frameworks.
- Privacy-respecting Transparency: Promoting data minimization, secure signing workflows, and controlled-access logs to ensure that transparency safeguards users without compromising IP or privacy.
- SME-ready Implementation: Helping design modular, open toolkits so SME providers can achieve transparency without disproportionate cost or complexity.
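To illustrate what a testable control set might look like in practice, here is a purely hypothetical sketch of a single conformance-profile entry; every identifier, field, and threshold is invented for illustration and is not drawn from the Code.

```python
# Purely hypothetical sketch of one conformance-profile control entry.
# None of these identifiers or thresholds come from the Code; they only
# illustrate pairing an Article 50 duty with evidence and a pass/fail test.
control = {
    "id": "ART50-PROV-001",  # invented identifier
    "duty": "Mark AI-generated images so they are machine-readable and detectable",
    "evidence": [
        "signed provenance manifest attached to each output",
        "watermark-detector report on a sampled set of transformed outputs",
    ],
    "test": {
        "metric": "detector recall after resize/compress/re-encode",
        "threshold": 0.95,  # illustrative target, not a normative value
    },
}


def passes(control: dict, measured_recall: float) -> bool:
    """The pass/fail check an auditor could run against collected evidence."""
    return measured_recall >= control["test"]["threshold"]


print(passes(control, measured_recall=0.97))  # True
```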
From Policy to Practice: Building Trust at Scale
During the plenary, the AI Office outlined key principles:
- Objective: transparency to counter manipulation, misinformation, fraud, and deception.
- Complementarity: Article 50 complements high-risk and GPAI provisions.
- Inclusivity: development through an open, stakeholder-driven process.
- Voluntariness: adherence to the Code is optional but recognized as a demonstrable pathway to compliance.
- Parallel guidance: legal Guidelines will clarify concepts, scope, and exceptions by Q1 2026.
Credo AI will advocate for evidence-based implementation artifacts (profiles, test harnesses, and assurance templates) that allow organizations to prove, not just claim, compliance.
The drafting process will continue through the first half of 2026, culminating in a final plenary to validate the Code before submission to the Commission. Once approved, the Code will offer a trusted pathway to compliance with Article 50’s transparency obligations ahead of the August 2026 applicability date. For organizations building or deploying generative AI in Europe, the message is clear: start designing for transparency now. Integrate provenance and signing into your pipelines, plan disclosure UX for your users, and define measurable success metrics for transparency effectiveness.
Credo AI will continue to work at the intersection of policy, standards, and implementation, helping organizations turn regulatory requirements into governance systems for trust.
If your teams are developing or deploying generative AI systems in Europe, let’s talk. We can help you implement Article 50 transparency-by-design in ways that are interoperable, auditable, privacy-respecting, and future-proof.
DISCLAIMER. The information we provide here is for informational purposes only and is not intended in any way to represent legal advice or a legal opinion that you can rely on. It is your sole responsibility to consult an attorney to resolve any legal issues related to this information.