THE LENS
Digital developments in focus

Computerised facial recognition: revolutionary tech or surveillance threat?

Recognising someone’s face is a skill humans exercise without thinking. But getting computers to demonstrate the same ability is something that has been in development since the 1960s.

The adoption of this technology could be revolutionary, not least in policing, where it has application both in spotting potential criminal activity and investigating it retrospectively. However, its usage in this area also opens many troubling new questions.

When a specially trained police officer checks CCTV footage, he or she will typically be looking for a particular person of interest, and other faces will naturally be forgotten. However, when a computer scans multiple people and checks their faces against a database of suspects, there is always the risk that the data could be stored to track people without their knowledge. As a commentator in a recent New Yorker feature points out, in the worst-case scenario this technology could be used to spy on a population on a massive scale. At the very least, it effectively makes everyone in the database a suspect every time it is searched.

UK police trials underway

The technology is already being trialled by police in the UK. This Christmas the Metropolitan Police force in London asked for volunteers to have their faces scanned so they could be used in a live experiment to pick out a suspect from multiple faces in the capital’s busy shopping streets and tourist areas.

The current intention is clear. Public safety could be better served by technology that can automatically spot if a suspect on a watchlist is moving through a crowd of people. At the same time, however, there are serious privacy and governance issues around processing people's faces without their permission through systems that could, in the wrong hands, be used for mass surveillance.

This is coupled with concerns over the so-called “coded gaze”: the tendency of algorithms to adopt and display the conscious and unconscious biases of the people who create and train them. Indeed, evidence suggests that computerised facial recognition systems already demonstrate divergent success rates when applied to the faces of people from different ethnicities.

Investigation underway

In response to these concerns, the Information Commissioner, Elizabeth Denham, announced last month that her office is carrying out an investigation into facial recognition.

The Telegraph has reported the enquiry was prompted by her concern over the use of facial recognition technology by police at last year’s Notting Hill Carnival and Remembrance Sunday events. Using AI computer systems to scan multiple faces, she explains, “is a real step change in the way law-abiding people are monitored as they go about their daily lives.”

The investigation will be one to watch, because it could offer much-needed guidance on balancing the need for law enforcement organisations to protect the public against citizens’ right to go about their everyday lives without being monitored.

Computer-vision systems potentially allow cops and employers to track behaviors and activities that are none of their business, such as where you hang out after work, which fund-raisers you attend, and what that slight tremor in your hand (recorded by the camera in the elevator that you ride to your office every morning) portends about the size of your future medical claims.

