
First provisions of EU AI Act now apply: what you need to know about AI literacy

The first provisions of the EU AI Act started to apply this Sunday (2 February). Companies that provide or deploy AI systems are now required to ensure AI literacy within their operations, and certain AI practices are now prohibited.

This blog focusses on the AI literacy provisions: what they cover, and what you need to do (according to some new guidance) to comply. For more information on the provisions relating to prohibited AI practices, see our EU AI Act Briefing here and these new Commission guidelines.

What is AI literacy?

The Act defines AI literacy as the skills, knowledge and understanding that allow providers (i.e. those putting AI systems onto the market or into service), deployers (i.e. those using AI systems) and affected persons to make an informed deployment of AI systems. It should also allow them to gain awareness of the opportunities and risks of AI and the possible harm it can cause.

Article 4 of the Act requires companies that provide or deploy AI systems to take measures to ensure, to their ‘best extent’, a sufficient level of AI literacy among their staff and any other persons dealing with the operation and use of AI systems on their behalf. In doing so, the organisation must take into account those persons’ technical knowledge, experience, education and training, and the context in which the AI systems are to be used, as well as the persons or groups of persons on whom the AI systems are to be used.

How can organisations ensure AI literacy?

A key question for organisations is what the requirements around AI literacy mean in practice (i.e. what they need to do to comply). The Autoriteit Persoonsgegevens (the “AP”), the Dutch Data Protection Authority, which is also responsible for regulating algorithms, has published guidance* that may help answer this. Unsurprisingly, the guidance stresses that there is no one-size-fits-all set of measures that ensures an adequate level of AI literacy. Instead, when providing AI literacy training, organisations must take into account the people involved and the degree of risk of the relevant AI system. The size and available (financial) resources of the organisation will also be relevant factors.

The guidance proposes that organisations use a multi-year plan to promote AI literacy which follows a four-step process:

  • Step 1 - Identify. This involves:
    • making an inventory of all AI systems used within the organisation. An organisation’s GDPR processing register may act as a useful starting point for this; and
    • documenting the people and roles involved alongside information on their AI knowledge and skills.
       
  • Step 2 - Determine goals. Organisations should set AI literacy goals and priorities based on relevant risk levels. Not every employee needs the same level of knowledge about a given AI system: those working with the system need sufficient knowledge and skills to understand how it works and what its risks are, while other employees should still be aware that AI systems are being used (and why they are being used).
     
  • Step 3 - Execute. Having set goals, organisations should put them into practice through appropriate strategies and actions. AI literacy should be high on the agenda at all levels of the organisation, and developments should be monitored. For example, an organisation could create a “how do we deal with AI” document to raise awareness of AI literacy. Organisations (particularly larger ones) may also want to assign concrete responsibility for AI literacy (for example, by appointing an AI officer).
     
  • Step 4 - Evaluate. Regularly analysing whether the objectives are being achieved allows organisations to determine new goals and measures (where necessary) to improve AI literacy and ensure the necessary standard is being maintained. For example, by conducting an annual survey among employees, an organisation can decide whether the measures being taken are producing the desired results. 

Comment

As the guidance says, “AI literacy is not an end goal, but a continuous process” and so it will be important for organisations to continually evaluate and refresh their AI literacy measures. This is particularly important given the fast pace of technological development in this space, which can create new opportunities and risks for organisations. 

It will also be interesting to see whether more member state regulators publish guidance on how to comply with the AI literacy obligations, and organisations should monitor this in key jurisdictions.

What next?

In terms of the AI Act more generally, while these provisions now apply, the majority of the Act will not apply until August 2026, with some provisions starting even later. However, the provisions on general-purpose AI models will apply this summer (from 2 August 2025, subject to some exceptions), and various other codes of conduct and templates (see here) will be finalised.

For a more in-depth look at the EU AI Act, the other implementation deadlines and practical steps that companies should be taking to ensure compliance, see our detailed briefing here.

*Note: the guidance is published in Dutch.

