The Met Police’s facial recognition tests are fatally flawed

The last weekend of August 2016 marked a huge change in policing in London. For the first time, the Metropolitan Police used facial recognition technology to scan people walking the streets of Notting Hill during its annual carnival.

Since then the tech – comprising cameras mounted on vans or in fixed positions, coupled with pattern-recognition algorithms – has been deployed by police nine more times in the UK’s capital. The use of the system has never been fully evaluated by the Met – its public statements about the tests say its own analysis of the system’s performance will be published in the future.

However, the first independent study (PDF) of the London force’s use of facial recognition has now been completed. And it doesn’t paint a positive picture. First reported by Sky News and The Guardian, the 128-page evaluation is a damning review of the technology’s serious failures.

The researchers conclude it is “highly possible” that the Met’s use of live facial recognition (LFR) would be held unlawful if challenged in court, and that its legal basis is likely to be “inadequate” under human rights law. (A lawsuit against the Met’s use of facial recognition was launched in July 2018; a similar case against South Wales Police is awaiting a decision from judges). Daragh Murray and Pete Fussey, both academics at the University of Essex, analysed the use of the technology and raised serious concerns about how it was being deployed, who was being added to watchlists, and whether its use was proportionate.

During the six facial recognition trials attended by Fussey and Murray, thousands of people passed in view of the cameras. From these, 42 potential matches were flagged by the tech. After reviews by police in control rooms and on the ground, only eight were verified as correct identifications. The system was right just 19.05 per cent of the time.
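The headline figure is straightforward arithmetic on the report’s numbers. A quick back-of-the-envelope check (in Python, purely illustrative):

```python
# Verified-match rate from the figures reported by the Essex researchers:
# 42 matches flagged across the six observed trials, eight verified as correct.
flagged = 42
verified_correct = 8

accuracy = verified_correct / flagged   # 0.1905 -> 19.05 per cent
error_rate = 1 - accuracy               # roughly 81 per cent, or four in five, wrong
print(f"{accuracy:.2%} correct, {error_rate:.2%} incorrect")
```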

The two researchers were given access by the Met Police to the trials, interviews with key officers, and documents about the facial recognition system, making theirs the most comprehensive analysis of the system to date. Before the report was published, a draft was provided to the Met so it could address any factual inaccuracies and respond to the criticisms. The force declined to do so, but has since told Sky News it was unhappy with the “negative and unbalanced tone” of the findings.

Facial recognition’s bias and accuracy issues have been well documented. Most recently, gender and racial bias has been found in Amazon’s technology. However, the Essex academics also highlighted lesser-known problems with the Met’s trials.

There’s little idea of who is being looked for

Facial recognition systems attempt to identify people based on watchlists: databases of photos (and other personal information, such as names) against which images captured by the cameras are compared. If there’s a close enough match between an image taken in real time and a file held on a watchlist, police officers are alerted to a potential match.
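The report doesn’t detail the Met’s matching algorithm, but the general shape of threshold-based watchlist matching can be sketched in a few lines. This is a minimal illustration, assuming a generic face-embedding model and an arbitrary similarity threshold – not the Met’s, or any vendor’s, actual implementation.

```python
import numpy as np

# Minimal sketch of threshold-based watchlist matching. It assumes faces have
# already been converted to embedding vectors by some face-recognition model;
# the 0.8 threshold is illustrative, not taken from any real deployment.

SIMILARITY_THRESHOLD = 0.8

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(live_embedding: np.ndarray, watchlist: list[dict]):
    """Return the best-scoring watchlist entry if it clears the threshold.

    Each watchlist entry is a dict such as {"name": ..., "embedding": ...}.
    A returned entry is only an alert for a human officer to review,
    not a confirmed identification.
    """
    best_entry, best_score = None, 0.0
    for entry in watchlist:
        score = cosine_similarity(live_embedding, entry["embedding"])
        if score > best_score:
            best_entry, best_score = entry, score
    if best_entry is not None and best_score >= SIMILARITY_THRESHOLD:
        return best_entry, best_score
    return None, best_score
```

The important design point is that clearing the threshold only raises an alert for officers to adjudicate; as the trials showed, most of those alerts turned out to be wrong.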

However, it’s not clear who is put on the Met’s watchlists. A separate database was created for each of the ten uses of facial recognition between 2016 and 2019, and they varied widely in size. At a Remembrance Sunday parade in November 2017 there were 42 people on the Met’s watchlist; in Romford, east London, in January 2019 there were 2,401 people on the database.

The researchers say it isn’t clear why people are included in the watchlist databases. “The condition of being ‘wanted’ was consistently stated as a criterion for being enrolled on a watchlist,” they write, adding that the definition of ‘wanted’ is ambiguous. “Those included on the watchlist thus apparently ranged from individuals wanted by the courts to those wanted for questioning, across a range of different offences.”

When they asked police officers about who was on the watchlist for two trials near the Westfield shopping centre in Stratford in 2018, the answers varied. The pair write: “Violence was almost always stated as reason for an individual’s presence on a watchlist. Yet this was regularly supplemented by reference to additional ‘other’ undefined offences or factors of ‘local interest’”. As the facial recognition tests continued at the start of 2019, the researchers say, the Met narrowed the definition of ‘wanted’ to refer specifically to violent offences.

Data isn’t up to date

Police databases can be huge and unwieldy, with records constantly needing to be updated. During the Met’s tests of facial recognition, the researchers found people included on the watchlists who had already been dealt with by the courts and were no longer wanted for the offences listed in the database.

In Romford in 2018, one 15-year-old boy was correctly identified by the facial recognition cameras but the stop was a waste of time. “This was a verified correct match but he had already been dealt with by the criminal justice system in the time between watchlist compilation and the LFR test deployment, and so should not have been included on the watchlist under the stated criteria for enrolment,” the researchers write.

In another case observed by the Essex academics, a person wanted for serious violent offences was stopped. When officers asked about the case, they found the information was out of date and that the person stopped had already been through the legal system. He was, however, wanted for the lesser crime of malicious communications. “It is unlikely this more recent offence would have been sufficiently serious to be included in the initial watchlist,” the researchers say.

There are also questions about the amount of time it takes to create police watchlists. Information can be drawn from the Police National Computer, the Met’s Emerald Warrant Management System and other databases of Met intelligence, including Crime Record Information Systems and custody image databases. The researchers say creating the watchlists was a manual task that involved staff cross-checking information from the different systems, and that some watchlist entries were left incomplete – including missing ethnicity information for some people.

“Ensuring accurate and up-to-date information from across these different data sources posed a significant challenge,” they write. “Such difficulties made compliance with overall standard of good practice complex”.
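The report describes compiling watchlists as a manual, error-prone job. Purely as an illustration of the kind of cross-checking involved – the field names and the “resolved” flag below are assumptions for the sake of the example, not the Met’s actual schema – a basic validation pass over a merged watchlist might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: flag watchlist entries that would breach the
# stated enrolment criteria. Field names ("resolved", "ethnicity") are
# assumptions for this example, not the Met's real database schema.

@dataclass
class WatchlistEntry:
    name: str
    offence: str
    ethnicity: Optional[str]  # the report notes this was sometimes missing
    resolved: bool            # already dealt with by the courts?

def validate(entries: list[WatchlistEntry]) -> list[str]:
    problems = []
    for entry in entries:
        if entry.resolved:
            problems.append(f"{entry.name}: case already resolved, should be removed")
        if entry.ethnicity is None:
            problems.append(f"{entry.name}: incomplete record (missing ethnicity)")
    return problems
```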

The locations of facial recognition aren’t obvious

The two trials in Stratford, near the Westfield Shopping Centre, happened in a high-footfall area through which thousands of people were passing. This has been the case for most facial recognition tests: South Wales Police, for instance, has used its tech at football matches and music concerts.

However, it’s not clear why the Stratford location – and others – were picked. Fussey and Murray say officers told them most of the crime in that part of London happened “some distance” away from the Westfield Shopping Centre. “For this test deployment, intelligence and statistics were taken from surrounding areas and not the site of the test deployment itself,” they write, questioning the decision to use the tech in that location.

There was no simple way to avoid the tech

In Romford, at the end of January 2019, one man was fined £90 for a public order offence after he became aggressive with police officers who questioned him for hiding his face from the facial recognition cameras. The Met’s publicity around its use of facial recognition says leaflets are handed out to people near the trials, officers talk to members of the public, and posters are put up to explain what is happening.

The researchers say the opportunity for people to decline to have their faces scanned was limited. People had to make the choice very quickly, and those who tried to avoid the cameras could be treated as suspicious. On a handful of occasions, police made arrests after chasing people who actively avoided the cameras. Six arrests in Romford were credited to the presence of the facial recognition vans rather than their direct use.

In some cases it was possible for people to cross the street to avoid the cameras, but in one incident an 18-minute walk would have been needed to stay out of their range. In another case seen by the researchers, people reading the information boards were already in range of the facial recognition cameras.

It’s not clear what the point of the trials was

“We’re trialling the new technology to find out if it’s a useful policing tactic to deter and prevent crime and bring to justice wanted criminals,” the Met says on its website. It has consistently described the deployments as trials or tests. But their purpose is disputed, as the researchers argue there is no obvious way to assess how useful the technology has been to police.

“A review of the documents made available to the authors results in the clear conclusion that the LFR test deployments were not set up solely and specifically as a research initiative,” the researchers write. They argue that using the system to identify wanted people in real time takes it beyond a research project that merely tests the technology.

“Although a clear plan appeared to be in place to evaluate the technical performance of LFR technology, it is not clear how the assessment of LFR as a policing tactic was to be conducted,” they say. “It is unclear how the manner in which the test deployments were conducted could contribute to answer questions such as the impact of LFR on communities, or how LFR could be effectively incorporated into policing activities.”

All Rights Reserved for Matt Burgess
