THE LENS
Digital developments in focus

Why diversity and inclusion are matters of fairness: understanding regulatory expectations through the lens of product and service design

My colleague, Ben Goldstein, wrote about why diversity and inclusion are regulatory issues, following a number of high-profile speeches at the UK Financial Conduct Authority (FCA).

Nikhil Rathi (CEO of the FCA) explained that a lack of diversity at the top “raises questions” about a firm’s ability to understand the different communities it serves, showing that the FCA is making a direct link between diversity and inclusion and the principle of “treating customers fairly” (TCF). If the FCA does not see improvements in diversity at senior levels and receive “better answers” from firms, it says it may exercise its regulatory powers.

This approach is influenced by the FCA’s extensive research on the financial lives of UK consumers. As of October 2020, 53% of adults in the UK exhibited “characteristics of vulnerability” and so were at greater risk of harm. Adults in Black, Asian and minority ethnic groups are disproportionately represented among the growing number of vulnerable consumers, and women are more likely than men to have a characteristic of vulnerability.

But how could the FCA hold a firm accountable for failings relating to diversity and inclusion? A key consideration will be whether the firm has upheld TCF principles, including the FCA's new guidance on the fair treatment of vulnerable customers.

Focusing now on product and service design, firms are reminded to consider the needs of vulnerable customers in their target market and customer base (i.e. “inclusive design”) and to assess the likelihood of any inadvertent side effects when developing products and services. This is particularly relevant when using complex technology, such as artificial intelligence (AI) systems.

The Information Commissioner’s Office (ICO) recently explained that there is a risk of AI systems treating people less favourably on the basis of protected characteristics. For example, a machine learning (ML) system used to approve loan applications could rely on a biased algorithm that gives women lower credit scores, resulting in fewer loans being approved for them. To avoid this, firms should stress-test ML systems in a range of environments and consider at the outset the possibility of discrimination arising, so that appropriate safeguards are in place:

  • Is the data in the ML system sufficiently reliable and representative of the customer base?
  • Can the firm explain how the ML algorithm operates?
  • Who is responsible for monitoring the ML system so discriminatory patterns are eliminated?
  • Did the firm consult with advisors and stakeholders to better understand the risks?
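By way of illustration, the sketch below shows one simple way a firm might begin to monitor an ML-based loan-approval system for the kind of disparity the ICO describes: comparing approval rates across groups defined by a protected characteristic and flagging large gaps for investigation. All names and data here are hypothetical (the toy applicant records, the placeholder model and the helper functions are illustrative, not drawn from any firm's system or from FCA or ICO guidance); a real review would use the firm's own model, a representative evaluation sample and established fairness metrics.

```python
# Minimal sketch of an outcome-disparity check for an ML loan-approval system.
# Hypothetical names and data throughout; "predict" stands in for the firm's model.

from collections import defaultdict


def approval_rates_by_group(applicants, predict):
    """Approval rate per group defined by a protected characteristic.

    applicants: iterable of dicts containing the model's input features plus a
                "group" key (used only for this analysis, not for the decision).
    predict:    callable returning True if the model would approve the loan.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for applicant in applicants:
        group = applicant["group"]
        total[group] += 1
        if predict(applicant):
            approved[group] += 1
    return {group: approved[group] / total[group] for group in total}


def disparity_ratio(rates):
    """Lowest approval rate divided by the highest across groups.

    Values well below 1.0 indicate that one group is approved far less often
    and that the model warrants further investigation before reliance.
    """
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Toy evaluation sample; in practice this would be a representative dataset.
    applicants = [
        {"income": 42_000, "group": "women"},
        {"income": 55_000, "group": "women"},
        {"income": 48_000, "group": "men"},
        {"income": 61_000, "group": "men"},
    ]
    toy_model = lambda a: a["income"] > 50_000  # placeholder for the real ML system
    rates = approval_rates_by_group(applicants, toy_model)
    print(rates, disparity_ratio(rates))
```

A check of this kind only surfaces unequal outcomes; it does not explain them. That is why the questions above about data representativeness, explainability and ongoing monitoring matter alongside any single metric.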

Ultimately, the end goal for regulators here is clear: diversity and inclusion within a firm are necessary to ensure fair and appropriate outcomes for customers.

Tags

fig, ai