THE LENS
Digital developments in focus

Accountability and technology: imagining the future of the SM&CR

The Senior Managers and Certification Regime, or 'SM&CR', is a regulatory lever that seeks to foster greater individual accountability within (most) regulated firms and takes aim at the conduct failings that fuelled the 2008 banking crisis. It is set to be reviewed in Q1 2023 in the wake of the Edinburgh Reforms of financial services.

With the SM&CR under the microscope, it is timely to consider how advances in technology could redefine how we conceive of individual conduct, competence and accountability. In particular, this post considers (sometimes speculative) developments in the areas of AI and neurotechnology, and how these might influence the design of future regulation.

AI

Regulators are already consulting on whether the SM&CR could be used to mitigate some of the data, governance and model-related risks introduced by the use of AI in financial services—for instance, through creating a new prescribed responsibility for AI. This approach can be thought of as accretive, broadening the reach of the SM&CR, as opposed to altering its underlying assumptions. We reflect further on this proposed extension of the SM&CR in our video 'AI in Financial Services: Facing the Future'. 

But on a longer timetable, the relationship between AI and accountability takes on a new texture should the cognitive ability and perception of AI systems come to mirror or exceed those of humans (known as artificial general intelligence or 'AGI', as we detail in our white paper 'Superhuman Resources'). In this theoretical context, it is reasonable to ask whether AI should be regulated in a manner similar to human beings under the SM&CR, with attendant expectations of competence and accountability. And indeed, regulators should likely give thought to regulating (and potentially prohibiting the use of) particular forms of AI well before AGI is reached, rather than attaching sole accountability to the firms and individuals who use it.

This perspective marks a sharp departure from the current, largely technology-neutral approach to regulation, and challenges the SM&CR's assumption that humans alone are responsible for firm culture and decision-making. It also raises questions about the accountability of developers, questions that are already being considered in a number of contexts, including that of distributed ledger technology (as we dig into in our article 'Leaps of Faith: Searching for Accountability in a Trustless Environment').

Neurotechnology

Neurotechnology, where electronic devices are connected to and interact with the nervous system, could present varied challenges for the SM&CR. A paper on neurotechnology authored by Dr Allan McCay and published by The Law Society in August 2022 observes that neurotechnology can both monitor and record neural activity, and influence it. With potential medical, military and commercial applications—some of which are already in evidence, some of which are highly speculative—the paper traces the possibility that neurotechnology could (among other things):

  • enable individuals to augment themselves by "'downloading' new skills and knowledge" and gain "mental capacities that are beyond the normal range";
  • monitor attention and alertness; and
  • open the door to hackers "hacking brains", causing an individual to "act impulsively or perhaps one day even to experience a particular hallucination". 

Should these applications emerge, and should the adoption of neurotechnology become widespread (both of which are pretty significant 'shoulds'), the SM&CR's core assumption that individuals can be held accountable for their actions is tested. What happens when an individual's reckless or impulsive behaviour can be attributed to their augmentation by neurotechnology, or to their having been hacked? Where should that line be drawn?

Neurotechnology also hints at the possibility of more invasive modes of regulation: could senior managers be required (or feel pressured) to 'download' particular skills and knowledge as a prerequisite to regulatory approval? And, in a much closer future, could the monitoring of attention become part of the regulatory toolkit, a means of supervision? 

Much of this appears far-fetched to us today. But, as noted by Dr McCay, "neurotechnology is likely to have an increasing impact on society and thus on the law", and it is an area that is receiving increasing attention. Most notably, the ICO is expected to publish a deep dive report on neurotechnology this spring, which we look forward to with interest. The proliferation of neurotechnology may be all the more plausible if, as suggested in the paper, "human augmentation...might be a way for some to handle the economic disruption brought on by AI". In this way, AI and neurotechnology may themselves interact, with significant consequences for the SM&CR and broader society.

For more information on the Edinburgh Reforms, see this briefing from my colleague Selmin Hakki, published in December 2022.


Tags

fig, ai, data, emerging tech, fintech