
Long awaited GPAI guidelines published ahead of 2 August deadline

Following hot on the heels of the general-purpose AI (GPAI) Code of Practice (see our blog), the Commission has finally published its guidelines clarifying the scope of the obligations for providers of GPAI models under the AI Act. These provisions apply from 2 August 2025.

Although the guidelines are non-binding, they explain how the Commission interprets key terms in the AI Act and will guide enforcement. In particular the guidelines are designed to help organisations along the AI value chain answer the following questions: 

  1. Is my model a GPAI model? 
  2. Could I be a provider placing a GPAI model on the market (e.g. if I modify a third-party GPAI model)?
  3. Am I exempt from the GPAI obligations (i.e. can I benefit from the open source exemptions)?
  4. What can I expect regarding the Commission’s enforcement of compliance with these obligations, in particular during the period immediately after 2 August?

 In this blog, the first in our series, we look at the first two of these questions. 

1. Is my model a GPAI model? 

The Act’s definition of a GPAI model includes a list of determining factors. For example, a GPAI model must display significant generality and be capable of competently performing a wide range of distinct tasks. To make this assessment more practical, the guidelines provide an indicative criterion to help decide whether a model is a GPAI model. The criterion is based on the amount of computational resources used to train the model (measured in FLOPs) as well as the modalities of the model. The Commission acknowledges that training compute is an “imperfect proxy for generality and capabilities”, but considers it to be the most appropriate approach at present.

The criterion thresholds set in the guidelines state that: 

  • the model’s training compute must be greater than 10^23 FLOPs. This was increased from the original proposal of 10^22 FLOPs following public consultation, and is currently typical of models trained on large amounts of data; and
  • the model must also be able to generate language (whether in the form of text or audio), or be a text-to-image or text-to-video model. These modalities were selected because AI models that generate language “are typically more capable of competently performing a wider range of tasks than other models.”

Note: despite these thresholds, the wording in the AI Act’s definition is still relevant. For example, the guidelines confirm that if a model does not display significant generality or only performs a narrow range of tasks, it will not be a GPAI model even if it meets the above criterion. 
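To make the criterion concrete, the sketch below (in Python) applies the indicative test as summarised above. The 10^23 FLOPs threshold and the qualifying modalities come from the guidelines; the function and parameter names are our own illustrative assumptions, not anything defined in the guidelines or the AI Act.

```python
# Illustrative sketch only: the threshold and modalities reflect the
# guidelines as summarised above; all names here are hypothetical.

GPAI_COMPUTE_THRESHOLD_FLOPS = 1e23

# Modalities the guidelines treat as indicative of generality.
QUALIFYING_MODALITIES = {"text", "audio", "text-to-image", "text-to-video"}

def meets_indicative_gpai_criterion(training_compute_flops: float,
                                    output_modalities: set[str]) -> bool:
    """Apply the guidelines' indicative criterion for a presumed GPAI model.

    Note: this is only a presumption. A model meeting the criterion can
    still fall outside the definition if it lacks significant generality
    or performs only a narrow range of tasks.
    """
    exceeds_compute = training_compute_flops > GPAI_COMPUTE_THRESHOLD_FLOPS
    has_qualifying_modality = bool(output_modalities & QUALIFYING_MODALITIES)
    return exceeds_compute and has_qualifying_modality

# Example: a text-generating model trained with 5 x 10^23 FLOPs.
print(meets_indicative_gpai_criterion(5e23, {"text"}))  # True
```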

The guidelines go on to discuss what is meant by an AI model’s lifecycle, when a GPAI model will fall within the systemic risk category and how a provider can contest a classification of systemic risk. They also provide guidance on how to understand training compute. 

2. Could I be a provider placing a GPAI model on the market?

The guidelines provide further guidance on the concepts of “provider” and “placing on the market,” looking in particular at when someone further down the AI value chain may be caught by the GPAI model provider obligations. For example, the guidelines discuss how organisations may: 

  • integrate a GPAI model into their AI system: here the organisation is unlikely to have to comply with the GPAI rules, although it will need to comply with any relevant rules for providers of AI systems; and
  • modify a GPAI model for their own purposes: here, both the organisation (the downstream modifier) and the original provider may need to comply with the GPAI rules. However: 
    • not every modification will mean the downstream modifier becomes a provider. The modification has to lead to a significant change in the model’s generality, capabilities, or systemic risk. The guidelines set out an indicative criterion to help determine this: the training compute used for the modification must be greater than a third of the training compute of the original model. Where the downstream modifier cannot be expected to know or estimate this value, the guidelines provide fallback thresholds (a sketch applying these thresholds appears after this list):
      • if the original model is a general-purpose AI model with systemic risk, the threshold should be replaced with a third of the threshold for a model being presumed to have high-impact capabilities (i.e. currently 10^25 FLOPs, as set out in Article 51(2) AI Act); and
      • if the original model is not high-impact, the value should be a third of the threshold for a model being presumed to be a GPAI model (i.e. currently 10^23 FLOPs, as set out in the guidelines and discussed in Q1 above).
        The idea here is that a model modified with this amount of compute will display a significant change, warranting the downstream modifier taking on its own responsibilities. The threshold is currently set quite high, with few modifications expected to meet it, although the Commission expects more downstream modifiers to be caught in the future.
    • The downstream modifier will only be a provider in relation to the modification (as set out in Recital 109 AI Act). For example, it will need to provide documentation (under Article 53 AI Act), but this is limited to information on the modification.
    • If modifications are made to a GPAI model with systemic risk in such a way that the downstream modifier becomes a GPAI provider, then the resulting (modified) model is presumed to also have high-impact capabilities and the downstream modifier will have to comply with the obligations of a provider of a model with systemic risk.
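As flagged above, here is a minimal sketch (in Python) of the threshold logic for downstream modifiers. The one-third rule and the 10^25 / 10^23 FLOPs fallbacks come from the guidelines and Article 51(2) AI Act as summarised in this post; the function and parameter names are hypothetical.

```python
# Illustrative sketch of the downstream-modifier threshold described above.
# Thresholds reflect the guidelines / Article 51(2) AI Act as summarised
# in this post; the names and structure are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption
GPAI_THRESHOLD_FLOPS = 1e23           # guidelines' indicative GPAI threshold

def modification_threshold_flops(original_training_compute: float | None,
                                 original_has_systemic_risk: bool) -> float:
    """Return the training-compute threshold above which a modification is
    presumed significant enough to make the modifier a provider."""
    if original_training_compute is not None:
        # General rule: a third of the original model's training compute.
        return original_training_compute / 3
    # Fallbacks where the original compute cannot be known or estimated.
    if original_has_systemic_risk:
        return SYSTEMIC_RISK_THRESHOLD_FLOPS / 3
    return GPAI_THRESHOLD_FLOPS / 3

def becomes_provider(modification_compute: float,
                     original_training_compute: float | None,
                     original_has_systemic_risk: bool) -> bool:
    return modification_compute > modification_threshold_flops(
        original_training_compute, original_has_systemic_risk)

# Example: fine-tuning a non-systemic-risk model of unknown training compute
# with 10^22 FLOPs stays below the ~3.3 x 10^22 fallback threshold.
print(becomes_provider(1e22, None, False))  # False
```

Of course, whether a modification actually makes someone a provider remains a legal question; these indicative figures only help to frame it.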

While these guidelines provide some helpful clarifications, there is little time for organisations to digest their content before the GPAI rules start to apply. In our next blog in this series, we will therefore discuss how the Commission will approach enforcement of the GPAI rules. 

 
