
Do you know what your AI is thinking? You may not like it

For an organisation seeking to implement an AI solution to improve its business processes (whether for internal or customer-facing use), Amazon's recently leaked foray into this area provides a good case study.

How much do you know about the training data you are using and any biases it may contain? For an unsupervised algorithm (where there is no human teacher confirming that a given output corresponds to a particular input), the answer is probably: not much. This type of algorithm is generally used to spot trends a human may have missed. The problem is that the algorithm's output will have taken on any biases present in the data.

This is less likely to be a problem in an AI solution intended to make a warehouse run more efficiently, or to reduce the electricity bill for your data centres.

But where decisions are being made about individuals, it is important that those individuals are not unlawfully discriminated against on the basis of protected characteristics, such as race or gender.

Amazon were trying to use an AI solution to review applicants for software developer jobs and produce a short-list for interview. Their biggest problem was the historic bias towards men in the training data, to the extent that the AI model was actively discriminating against any use of the word "women" in resumes.
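To make the mechanism concrete, below is a minimal, purely illustrative sketch: a toy resume screener (nothing to do with Amazon's actual system, which has never been published) trained on a handful of synthetic, historically biased hiring decisions. All names and data are invented.

```python
# Toy resume screener trained on synthetic, historically biased data.
# Purely illustrative: not Amazon's system, and deliberately tiny.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic history: resumes mentioning "women's" activities were
# mostly rejected in the past, simply because past hiring skewed male.
resumes = [
    "software developer python java",
    "software engineer distributed systems",
    "python developer women's coding society",
    "java engineer women's chess club",
]
hired = [1, 1, 0, 0]  # the historical outcomes carry the bias

vec = CountVectorizer().fit(resumes)
model = LogisticRegression().fit(vec.transform(resumes), hired)

# The screener has quietly learned a negative weight for the token
# "women": the bias in past decisions is now baked into the model.
weight = model.coef_[0][vec.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.3f}")  # a negative value
```

With four fabricated examples the effect is exaggerated, but the dynamic is the same at scale: gender is never supplied as a feature, yet the model learns a proxy for it from the language in the data.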

Fortunately, Amazon were able to spot and correct this issue before the algorithm was put to use. However, because they could not be sure what other biases the AI might have picked up from the company's past decisions, they gave up and scrapped the whole thing.

Although this may sound like a failure, it actually shows that Amazon had good governance procedures in place, designed to spot some of the biggest issues facing businesses wanting to make use of unguided algorithms: bad data in, bad data out; watch out for bias; and can you audit what your AI is thinking?

Amazon could see some of the biases. But recruitment is an area where bias and discrimination (even inadvertent) are major issues, and the lack of transparency in the AI's reasoning meant they could not rule out others. In the end, they made an informed, risk-based decision not to go ahead.
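One practical audit technique, offered here as a general suggestion rather than anything the Amazon story describes, is a counterfactual test: score pairs of inputs that differ only in a gendered phrase and flag any systematic gap. Reusing the hypothetical toy screener sketched above:

```python
# Counterfactual audit: compare the model's score for a resume with
# and without a gendered phrase. model/vec are the toy screener above;
# in practice, substitute whatever scoring interface your system exposes.
def counterfactual_gap(model, vec, resume: str) -> float:
    """Change in predicted hire probability when a gendered phrase is added."""
    variant = resume + " women's engineering society"
    p_base = model.predict_proba(vec.transform([resume]))[0][1]
    p_variant = model.predict_proba(vec.transform([variant]))[0][1]
    return p_variant - p_base

gap = counterfactual_gap(model, vec, "software developer python java")
print(f"counterfactual gap: {gap:+.3f}")  # negative => gendered language penalised
```

A consistently negative gap across a large set of test inputs is a red flag that the model is using gendered language as a decision signal, even when no one designed it to.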

If your business is considering an AI solution, do you know enough about its thinking to make the same kind of informed, risk-based decision?

For more information on good governance and the responsible deployment of AI in business, please see our joint white paper with ASI Data Science.

The company was able to edit the algorithm to eliminate these particular biases. But a larger question arose: what other biases was the AI reinforcing that weren't quite so obvious? There was no way to be sure. After several attempts to correct the program, Amazon executives eventually lost interest, and in 2017 the algorithm was abandoned. The incident shows that because humans are imperfect, their imperfections can get baked into the very algorithms built in the hope of avoiding such problems. AIs can do things we might never dream of doing ourselves, but we can never ignore a dangerous and unavoidable truth: they have to learn from us.
