While there has been much focus in the UK on the impact of the EU’s AI Act and delays to the publication of an AI Bill, a new law was passed this summer which will impact AI use and development. The Data (Use and Access) Act 2025 (DUA), which passed into law on 19 June, amends the UK’s data protection laws (amongst other things). While not an AI-specific law, a number of its provisions seek to provide a more favourable regime for innovation, including certain AI activities within the UK. These include its provisions on automated decision making, scientific research and the new duties of the UK data regulator (currently the Information Commissioner’s Office (ICO)).
Automated decision making
Automated decision making (or ADM) is the process of making a decision by automated means without any human involvement. An increasing number of organisations are using AI to help with their ADM. Where such use of AI involves decisions being made about individuals that have a legal or other significant effect on them (for example, where AI alone makes decisions in a recruitment process), the UK GDPR’s strict ADM rules apply.
The UK GDPR’s current rules provide individuals with a “right not to be subject to” ADM unless:
- their explicit consent has been obtained; or
- it is necessary for entering into or performing a contract between the organisation and the individual; or
- the ADM is required or authorised by law.
Relaxation of the rules
DUA will significantly relax this position. It will allow organisations to carry out ADM in reliance on legitimate interests (or other UK GDPR legal bases) in most cases, removing the need to rely on one of the narrow exceptions above, such as explicit consent.
However, the Government has been careful to balance these pro-business relaxations with protections for individuals. For example, the changes do not apply where ADM involves the processing of special category data.
In addition, existing UK GDPR safeguards for individuals regarding ADM have been preserved and brought together in one new section of the legislation. They require organisations carrying out ADM to provide individuals with information about automated decisions. Organisations must also enable individuals to make representations and obtain human intervention in relation to such decisions and to contest them.
Additional clarification and guidance
DUA also attempts to clarify some terminology in the ADM rules which was said (in a Government impact assessment) to have hampered their application. For example, the ADM rules only apply to ‘solely’ automated decisions and DUA clarifies that these are decisions that lack “meaningful human involvement”.
At a recent conference, Government representatives emphasised the importance of this clarificatory wording. It is, however, largely a codification of existing guidance (from the ICO and EDPB) and so arguably does not move the dial much on understanding. The ICO has promised to publish new guidance on ADM in light of DUA’s changes, as part of its new AI and biometrics strategy, and this guidance, expected in spring 2026, may well provide more helpful colour on these questions.
DUA also contains a regulation-making power to enable the Government to issue further clarification on this issue.
Scientific research
The UK GDPR provides some adjustments and exemptions to its rules where processing is taking place for the purposes of ‘scientific research’. The question for many AI developers is therefore whether their AI research can benefit from these rule relaxations.
DUA confirms that commercially funded, private sector research (which much AI research will be) may fall within the definition of scientific research, as long as the activities can “reasonably be described as scientific”. In one sense, these changes are just another codification of existing ICO guidance. However, they may provide organisations carrying out AI research with additional clarity and confidence that the exemptions applying to scientific research are available to them. DUA also expands some of these relaxations, including in relation to transparency and purpose limitation.
There was some debate during DUA’s passage through the House of Lords about whether AI research should fall within the definition of ‘scientific research’. The Lords had attempted to introduce a requirement that such scientific research must be in the public interest, thereby restricting when the research provisions would apply. However, the Government rejected this addition. As such, while this continues to be a complex area, the Government has left the door open for AI developers, particularly those operating at the frontier, to take advantage of the UK GDPR’s scientific research rules.
Pro-innovation data regulator
DUA confirms and bolsters the ICO’s pro-innovation stance by including new duties for the Information Commissioner to consider innovation and competition in carrying out their functions (for example, their approach to enforcement).
This builds on the current approach - the ICO has positioned itself as a pragmatic and pro-growth regulator for some time (see this blog), and the UK Government also made it clear in their AI Opportunities Action Plan that they expect regulators like the ICO to support AI innovation (see blog).
We talk more about the impact of DUA on the ICO’s pro-innovation stance in this podcast.
Comment
While DUA’s changes are fairly moderate, one of the key policy drivers for the legislation was to reform areas of data law where a lack of clarity was “impeding the safe development and deployment of some new technologies”. The resulting reforms and clarifications within DUA, particularly around ADM, should provide a streamlined regime for at least some AI processing. Perhaps more significantly, they indicate that the UK regulatory regime is focussed on facilitating growth.
For more detail on DUA’s changes more generally, see our blog.