Your Practical Guide to the EU AI Act and GPAI Guidelines

On August 2, 2025, the provisions of the EU AI Act governing general-purpose AI (GPAI) models officially came into force. With the simultaneous publication of the GPAI Guidelines, the European Commission has for the first time provided concrete clarity on how these rules apply in practice.

The guidelines help developers, deployers, and other actors along the entire AI value chain understand their responsibilities – and innovate with confidence within the European legal framework. This article provides a concise summary of the key rules and practical guidance for implementing them in your organization.


What has been in force since August 2025

The EU AI Act takes a risk-based approach: the greater the risk posed by an AI system or model, the stricter the requirements. For GPAI models, the law distinguishes between two categories:

  • All GPAI models trained with more than 10²³ FLOP are subject to basic transparency and documentation requirements
  • GPAI models with systemic risk – those trained with more than 10²⁵ FLOP – are subject to significantly stricter obligations: mandatory model evaluations, incident reporting, and cybersecurity measures
[Figure: Timeline of key EU AI Act implementation deadlines, 2024–2027 – GPAI rules in effect since August 2025]
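The two-tier threshold logic can be sketched in a few lines of code. Note that the 6 × parameters × training tokens rule of thumb used below is a common public approximation for estimating training compute, not the official calculation method – the GPAI Guidelines themselves define which estimation approaches and public sources are acceptable.

```python
# Rough training-compute estimate using the widely cited
# FLOP ≈ 6 × parameters × training tokens approximation.
# Illustrative only: the GPAI Guidelines specify the
# accepted estimation methods.

GPAI_THRESHOLD = 1e23            # FLOP: basic GPAI obligations
SYSTEMIC_RISK_THRESHOLD = 1e25   # FLOP: systemic-risk obligations


def estimate_training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6 * params * tokens


def classify(flop: float) -> str:
    """Map an estimated compute value to the AI Act's GPAI tiers."""
    if flop > SYSTEMIC_RISK_THRESHOLD:
        return "GPAI with systemic risk"
    if flop > GPAI_THRESHOLD:
        return "GPAI"
    return "below GPAI threshold"


# Example: a hypothetical 7B-parameter model trained on 2T tokens
flop = estimate_training_flop(7e9, 2e12)   # ≈ 8.4e22 FLOP
print(classify(flop))                      # below GPAI threshold
```

Such a back-of-the-envelope check is only a first filter for scoping; the documented estimate in your inventory should follow the calculation rules set out in the Guidelines.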

Importantly, the obligations of the EU AI Act do not apply only to providers of large foundation models such as OpenAI or Meta. Organizations that integrate GPAI models into their own products, or that fine-tune them, also bear responsibility – depending on how extensively they modify the model.

What the GPAI Guidelines actually regulate

The GPAI Guidelines introduce clarity in three specific areas:

Calculation of training compute thresholds: The guidelines define how FLOP values are calculated and which public sources can be used for estimates – essential for determining whether a given model falls within the regulatory scope at all.

Documentation requirements: Depending on the model category and the actor's role, different documentation is required – from technical descriptions and training data summaries to risk assessments for models with systemic risk.

Distribution of obligations along the value chain: The guidelines clarify which obligations apply to which actor – depending on whether a model is merely used, integrated, or actively modified.

A 5-step guide for downstream providers

If your organization integrates or modifies GPAI models in AI products or services, these five steps can help you meet your compliance obligations:

1. Create a use-case inventory: Maintain an up-to-date inventory of all AI systems incorporating GPAI models. Document licence information and the estimated training compute for each model to verify whether the thresholds of 10²³ FLOP (GPAI) or 10²⁵ FLOP (systemic risk) are exceeded.
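As an illustration of what such an inventory entry might capture, here is a minimal sketch; the record structure, field names, and example values are hypothetical, not prescribed by the AI Act or the Guidelines.

```python
# Hypothetical sketch of a GPAI use-case inventory entry.
# All fields and values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class GPAIModelRecord:
    system_name: str      # your AI system or product
    model_name: str       # underlying GPAI model
    licence: str          # e.g. "Apache-2.0" or proprietary terms
    training_flop: float  # documented training-compute estimate (FLOP)
    modification: str     # "used as-is", "integrated", "fine-tuned"

    def exceeds_gpai_threshold(self) -> bool:
        return self.training_flop > 1e23

    def exceeds_systemic_risk_threshold(self) -> bool:
        return self.training_flop > 1e25


inventory = [
    GPAIModelRecord("support-chatbot", "example-llm-70b",
                    "Apache-2.0", 5e24, "fine-tuned"),
]

for rec in inventory:
    print(rec.system_name,
          rec.exceeds_gpai_threshold(),           # True
          rec.exceeds_systemic_risk_threshold())  # False
```

Keeping the thresholds as explicit checks in the inventory makes it straightforward to re-run the classification whenever a model is swapped or a compute estimate is revised.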

2. Conduct a gap analysis and establish governance policies: Based on the AI Act, the Code of Practice, and the GPAI Guidelines, determine which obligations apply to each model. Since most downstream providers do not train models from scratch, focus on obligations related to integration and modification.

3. Update use-case prioritization: Integrating and fine-tuning GPAI models can trigger different levels of compliance obligations. When prioritizing use cases, consider which model is used, how deeply it is modified, and which licence type applies.

4. Apply and monitor compliance policies consistently: Ensure your policies are applied every time AI teams develop, integrate, or modify a GPAI model. Generate and retain evidence of compliance. Depending on your organization's risk tolerance, consider establishing multiple lines of defense: internal reviews, responsible AI oversight, and external audits.

5. Build AI competence across your workforce: Define and deliver role-specific AI literacy training – from AI users and system developers to AI coordinators and leadership.

Compliance as a team effort

A common mistake: compliance is treated as a purely legal or technical task. In practice, implementing the EU AI Act requires close collaboration between three teams:

The governance team identifies legal obligations, assesses risks, and coordinates audits. The platform team translates these requirements into technical specifications, standardizes MLOps processes, and automates logging and reporting. The engineering team develops the AI systems and ensures they meet the defined standards.

Only when all three teams work with shared processes, clear documentation, and automated compliance checks does a scalable and robust compliance structure emerge.

From regulation to implementation

The GPAI Guidelines are an important step toward greater clarity. The real challenge, however, lies in putting them into practice: how do you integrate these requirements into existing processes? How do you ensure your teams work in a compliant way – without stifling innovation?

appliedAI helps organizations close exactly this gap: with GPAI-specific governance, technical support for compliant-by-design development, and role-specific AI literacy training.

Whether you are developing, integrating, or auditing AI systems – our goal is to help you move from regulation to implementation faster and with greater confidence.

Want to know which GPAI obligations apply to your organization? Talk to us – we're here to support you wherever you need us.