RegImpact
EU AI Act · Published 7/30/2025

Overview of Guidelines for GPAI Models

On 18 July 2025, the European Commission published draft Guidelines clarifying key provisions of the EU AI Act applicable to General Purpose AI (GPAI) models. The Guidelines provide interpretive guidance on the definition and scope of GPAI models, related lifecycle obligations, systemic risk criteria, and notification duties for providers. Once translated into all EU languages, […]

What this rule actually says

The EU AI Act now has official guidelines explaining what counts as a "General Purpose AI" (GPAI) model: basically, an AI model that can be used for lots of different tasks, like GPT-4 or Claude. If you built one and put it on the EU market, you have new obligations: documenting how it works, monitoring for risks, and telling authorities if it could cause serious harm. The GPAI obligations take effect on 2 August 2025.

Who it applies to

  • You're in scope if: You're placing a GPAI model (a foundational model that can handle multiple downstream tasks) on the market in the EU, even if your company is based elsewhere.
  • You're likely NOT in scope if: You're using someone else's GPAI model (like OpenAI's API or an open-source model) as a component in a specialized product. Using GPT-4 to build a medical scribe tool probably doesn't make *you* a GPAI provider—it makes OpenAI one.
  • Jurisdiction: This applies if you offer your model to customers in EU member states. It doesn't matter where you're incorporated.
  • In vs. out: The rules apply to the model itself. If you're fine-tuning an existing model for a specific use case (hiring assistant, support chatbot), the Guidelines indicate you only take on provider obligations when your modification uses a significant share of the original model's training compute (roughly more than a third), so most narrow fine-tunes stay out of scope; still, monitor this closely.

What founders need to do

  1. Audit your product (1–2 days). Ask: "Did I train a foundational model from scratch, or am I building on top of an existing one?" If you're the latter, you likely have minimal direct compliance burden.
  2. Document your model if you built it (3–5 days). Write down training data sources, known limitations, and potential failure modes (a rough sketch follows this list). The EU wants transparency, not perfection.
  3. Set up risk monitoring (ongoing, 2–3 hours/week). Track user reports and edge cases where your model might cause serious harm (medical misdiagnosis, discrimination in hiring). Log these (see the sketch after this list).
  4. Know the notification threshold (immediate). If your GPAI model is classified as posing "systemic risk" (the Act presumes this above roughly 10^25 FLOPs of training compute), notify the Commission without delay, and in any case within two weeks of knowing you'll cross that threshold. This is a high bar that only the largest frontier models are expected to clear.
  5. Watch for translations and clarifications (ongoing). The official guidelines are still rolling out in all EU languages. Recheck in Q4 2025.
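
To make steps 2 and 3 concrete, here is a minimal sketch of what internal model documentation and a risk-incident log could look like. The field names, the `RiskIncident` structure, and the JSONL log format are illustrative assumptions, not a template prescribed by the AI Act or the Guidelines; adapt them to whatever your counsel and the final EU templates require.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

# Illustrative model documentation record (step 2). The fields are our own
# assumption of what "transparency" needs, not an official EU template.
MODEL_DOC = {
    "model_name": "acme-gpai-v1",  # hypothetical model name
    "training_data_sources": ["licensed corpus X", "public web crawl Y"],
    "known_limitations": ["weak on non-English legal text"],
    "failure_modes": ["hallucinated citations", "biased ranking of job candidates"],
    "last_reviewed": "2025-07-30",
}

@dataclass
class RiskIncident:
    """One logged report of potential serious harm (step 3)."""
    reported_at: str   # ISO 8601 timestamp
    source: str        # e.g. "user report", "internal eval"
    description: str   # what happened or nearly happened
    severity: str      # your own scale, e.g. "low" / "medium" / "high"
    follow_up: str     # mitigation or escalation decided

def log_incident(incident: RiskIncident, log_path: Path = Path("risk_log.jsonl")) -> None:
    """Append one incident as a JSON line, so the log stays easy to audit."""
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(incident)) + "\n")

if __name__ == "__main__":
    log_incident(RiskIncident(
        reported_at=datetime.now(timezone.utc).isoformat(),
        source="user report",
        description="Scribe summary suggested an incorrect drug dosage.",
        severity="high",
        follow_up="Escalated to clinical reviewer; added a dosage guardrail.",
    ))
```

Even a lightweight log like this gives you something concrete to show a regulator or enterprise customer who asks how you monitor for serious harm.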

Bottom line

If you're building *on top of* existing GPAI models, monitor but don't panic; if you *trained* a foundational model, start documenting now—you have active obligations.