
New AI rules: simplified data laws to help AI

We have known for some time now that new AI regulation is on the horizon. To date, the focus has been on the EU’s plans, but this month the Government’s consultation on proposed changes to the UK’s data protection laws sets out some concrete suggestions for the changes we may see in the UK.

So what can we learn from the consultation?

The consultation is wide-ranging (see our blog UK GDPR 2.0), and a number of the changes discussed (for example, proposed changes to the application of data protection impact assessments) could be relevant to those developing and using AI. However, there is also a section on AI and machine learning, which looks specifically at the following issues.

Fairness

The issue: We all want, or expect, AI systems to be fair, but fairness is a broad and context-specific concept. Different notions of fairness appear across several legislative frameworks (the GDPR, the Equality Act, employment law, etc.), and navigating these, and applying them to AI systems, is complex. The Government recognises that further uncertainty is caused both by the lack of (or at least limited) specific guidance on how practitioners can apply the concept of fairness when building and using trustworthy AI systems, and by a fragmented governance framework. There is a proliferation of initiatives, with multiple bodies producing guidance to try to fill the gaps in organisations’ knowledge (an issue we discussed earlier this year in our Regulating Digital data podcast). This risks creating regulatory confusion for organisations, given the potentially broad and overlapping legal definitions of fairness.

Suggested solution: The UK’s AI governance framework should provide clarity about an organisation’s responsibilities regarding fairness (fair data use, procedural fairness and outcome fairness) when developing and deploying AI systems. The issue will be addressed in more depth in the UK’s National AI Strategy, which is due to be published this year.

Building trustworthy AI systems

The issue: The Government is considering how to develop a safe regulatory space for the responsible development, testing and training of AI, building on existing initiatives such as the ICO’s regulatory sandbox. As part of this, it is looking at how organisations can use data more freely, for example to train and test AI (although this must obviously be done responsibly, in line with standards such as the OECD principles on AI).

Suggested solution: Proposals in the consultation include:

  • Legitimate interest: Allowing organisations to use the legitimate interests ground to process personal data without having to apply the usual balancing test where the processing is for one of a limited list of purposes, one of which would be to ensure bias monitoring, detection and correction in AI systems (see the sketch after this list for what such a check might involve).
  • Sensitive personal data: Clarifying that, where the use of sensitive personal data is needed to achieve that bias monitoring, detection and correction, organisations may be able to rely on an existing derogation in the Data Protection Act 2018 (the derogation in paragraph 8 of Schedule 1 covers ‘identifying or keeping under review the existence or absence of equality of opportunity or treatment of [specified vulnerable] people’); or creating a new condition in Schedule 1 to address this point specifically.
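To make the bias monitoring point more concrete, here is a minimal sketch of what a simple bias detection check could look like in practice. It assumes a pandas DataFrame of automated decisions with a protected attribute recorded solely for monitoring purposes; the column names, sample data and the 0.8 threshold (borrowed from the US ‘four-fifths’ rule of thumb) are our own illustrative assumptions, not anything set out in the consultation.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate for each group in the protected attribute column."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Hypothetical automated loan decisions (1 = approved), with a protected
# attribute collected solely for bias monitoring purposes.
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
if ratio < 0.8:  # the 'four-fifths' rule of thumb; an illustrative threshold only
    print(f"Possible disparity detected (ratio = {ratio:.2f}); flag for investigation")
```

A check like this is precisely the kind of processing the legitimate interest and Schedule 1 proposals above are designed to put on a firmer legal footing, since it requires collecting the protected attribute in the first place.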

Automated decision-making and data rights

The issue: There are specific rules in the GDPR around decision-making processes that rely on AI technologies (for example, those that sift through loan and job applications). They give individuals the right not to be subject to a decision that has legal or ‘similarly significant’ effects and is based solely on automated processing, including profiling, unless an exemption applies (Article 22 GDPR). The rules further protect individuals by requiring organisations to give them specific information about the processing, to take steps to prevent errors, bias and discrimination, and to give individuals the right to challenge the decision and request a ‘human’ review of it. However, as can be imagined, there is some confusion around both when and how to apply the rules:

  • When do the rules apply? Most decision-making has some human involvement, even if it is only superficial, so where do you draw the line? And what is a ‘similarly significant’ effect?
  • How in practice can they be followed? CDEI research on the use of algorithmic tools in recruitment suggests that organisations that may be screening thousands of applications are not clear on how they can offer a human review of them all (one way of recording such a review is sketched after this list).
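On the human review point, the sketch below shows one very simple way an organisation might record whether a decision remained solely automated or received meaningful human input, for example following a challenge by the individual. All names, fields and the threshold are hypothetical illustrations; the GDPR does not prescribe any particular mechanism, and whether a given review is meaningful enough to take a decision outside Article 22 is precisely the kind of question the consultation raises.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    applicant_id: str
    score: float                    # output of the automated screening model
    outcome: str = "pending"
    solely_automated: bool = True   # tracks whether Article 22-style rules may bite
    review_notes: list = field(default_factory=list)

def automated_screen(applicant_id: str, score: float, threshold: float = 0.5) -> Decision:
    """Make the initial, fully automated decision."""
    decision = Decision(applicant_id, score)
    decision.outcome = "accept" if score >= threshold else "reject"
    return decision

def human_review(decision: Decision, reviewer: str, note: str) -> Decision:
    """Record a human review, e.g. after a challenge, so the decision is no longer 'solely' automated."""
    decision.solely_automated = False
    decision.review_notes.append(f"{reviewer}: {note}")
    return decision

d = automated_screen("app-123", score=0.42)
print(d.outcome, d.solely_automated)   # reject True
d = human_review(d, "hr-officer-7", "re-checked application; rejection upheld")
print(d.outcome, d.solely_automated)   # reject False
```

Even with bookkeeping like this, the CDEI’s point stands: offering a genuine review to thousands of applicants is an operational challenge, not just a record-keeping one.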

Suggested solution: There is not one yet; the Government is using the consultation to gather further evidence on whether legislative change is needed. It is seeking views on potentially clarifying and limiting the scope of Article 22, and on whether respondents agree with a Taskforce recommendation to remove Article 22 entirely and allow the use of solely automated AI systems in the UK on the basis of legitimate interests or public interest.

Public trust in the use of data-driven systems

The issue: Public trust in data use is key, but the way many AI systems operate (collecting very sensitive attributes about a person without identifying them - so-called ‘soft biometric data’ - or making inferences from group-level characteristics which may be biased) can detrimentally impact that trust. The law already contains various transparency obligations and tools to help, but these have limitations. For example, data protection impact assessments can help organisations assess issues of fairness ‘where the processing may give rise to discrimination’ (as suggested in AI guidance), but it is not clear whether they are the best vehicle to address these issues. They are also only one of many impact assessments that organisations may be carrying out (alongside, for example, equality impact assessments and an emerging market for algorithmic impact assessments).

Suggested solution: The consultation is seeking views on the effectiveness of current tools, provisions and definitions to address profiling issues and on whether legislative changes are needed. It also:

  • asks whether data protection is the right legislative framework to manage this; and
  • confirms that work is currently underway by the CDEI, and as part of the National AI Strategy, to assess the need for broader algorithmic impact assessments.

The consultation also points to useful guidance that already exists, such as the ICO’s AI Auditing Framework and Project ExplAIn guidance, and the Government Digital Service’s Understanding AI Ethics and Safety guidance.

Comment

In the ministerial foreword to the consultation, Oliver Dowden (then Digital Secretary) promises that the Government’s plans will result in “simplifying data use by… developers of AI and other cutting-edge technologies.” The consultation does suggest that the UK Government both recognises some of the issues with applying current data protection (and other) rules to AI development and implementation, and is open to making legislative changes in this area. It is less clear, however, whether future changes will be able to provide practical and workable solutions for businesses that are already grappling with a multitude of guidance as well as sectoral and data regulation. We will also need to wait for the National AI Strategy (which the consultation says is expected later this year, but which press reports suggest may be later this week) to understand the full regulatory landscape.

