The Met Police will start using live facial recognition across London

Live facial recognition has been trialled for three years – now it's going to be used on a regular basis by the Metropolitan Police

The Met Police has announced that it will start using controversial live facial recognition systems as part of its regular policing in London.

The force has been trialling face-matching technology on the streets of the capital since 2016 but will now significantly increase its use. "The use of live facial recognition technology will be intelligence-led and deployed to specific locations in London," it said in a statement.

The Met says that the rollout of live facial recognition will start in places where it believes the technology can help "locate serious offenders" and tackle crime. It's believed the first deployments will begin within the next month, although locations have not yet been announced. At this stage it is also unclear whether the facial recognition cameras will be permanently installed in one place or moved frequently around the city.

Independent analysis, commissioned and since dismissed by the Metropolitan Police, found matches made using the Met's facial recognition systems were inaccurate 81 per cent of the time.

Met Police assistant commissioner Nick Ephgrave says that cameras will be "focused on a small, targeted area to scan passers-by" and that the cameras will be clearly signposted. As has been common in trials of the technology, police officers will hand out leaflets telling people about its use.

"The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR," the Met says. Between 2016 and 2019, the Met used facial recognition in 10 separate trials: including at the Notting Hill Carnival, in Leicester Square and at Remembrance Sunday commemorations taking place on Whitehall in November 2017.

So how do the systems work? Live facial recognition technology captures images of people's faces as they walk within range of cameras. In trials across London the cameras have been attached to police vans that are clearly marked to explain that facial recognition technology is being used. The cameras can also be attached to static poles.

Once a person has been detected by a camera, a biometric map of their face is created and checked against an existing database of images. These databases are often called watchlists and can be individually created for different locations and policing purposes.

The Met says for each of its live facial recognition uses across London it will create "bespoke" watchlists for the people it wants to identify. These will be "predominantly those wanted for serious and violent offences," the force says. Across the UK, more than 20 million photos of faces are contained in police databases.
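For a rough sense of what that matching step involves, here is a minimal sketch in Python. It assumes each detected face is reduced to an "embedding" vector that is compared against enrolled watchlist entries using a similarity score; the function names, the 128-dimension embedding and the 0.8 threshold are all illustrative assumptions, not details of the Met's proprietary system.

```python
import numpy as np

# Illustrative only: the Met's real system is proprietary. Here a face
# is reduced to a fixed-length "embedding" vector, and matching means
# finding the most similar enrolled face above a tuned threshold.
EMBEDDING_SIZE = 128   # hypothetical vector length
MATCH_THRESHOLD = 0.8  # hypothetical similarity cut-off

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_watchlist(face: np.ndarray, watchlist: dict) -> tuple:
    """Return (person_id, score) for the best match, or (None, score)."""
    best_id, best_score = None, 0.0
    for person_id, enrolled in watchlist.items():
        score = cosine_similarity(face, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= MATCH_THRESHOLD:
        return best_id, best_score  # candidate match: alert an officer
    return None, best_score         # no alert: image would be discarded

# Demo with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
watchlist = {f"suspect-{i}": rng.normal(size=EMBEDDING_SIZE) for i in range(5)}
print(check_watchlist(rng.normal(size=EMBEDDING_SIZE), watchlist))
```

The threshold is the crucial tuning knob in any system of this kind: set it low and more wanted people are flagged, at the cost of more passers-by being wrongly matched.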

When the system makes a match, it alerts police stationed near the cameras. Officers can then decide whether the alert is correct and stop the person on the ground. Police say the system only keeps images captured by the cameras when a match has been made or an arrest happens; in those cases they're kept for 31 days.
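That retention rule is simple enough to express directly. The sketch below, again in Python and with invented field names, applies the stated policy: frames that produce no match are discarded, and matched or arrest-related frames are held for at most 31 days.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=31)  # per the Met's stated policy

@dataclass
class CapturedImage:
    # Field names are invented for illustration.
    captured_at: datetime
    matched: bool      # did the system alert on this frame?
    arrest_made: bool  # did the stop lead to an arrest?

def should_retain(image: CapturedImage, now: datetime) -> bool:
    """Discard immediately unless the frame produced a match or an
    arrest; even then, keep it for no more than 31 days."""
    if not (image.matched or image.arrest_made):
        return False
    return now - image.captured_at <= RETENTION_PERIOD

# Example: a matched frame from 40 days ago should no longer be held.
now = datetime.now(timezone.utc)
old_match = CapturedImage(now - timedelta(days=40), matched=True, arrest_made=False)
print(should_retain(old_match, now))  # False
```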

The rollout marks a significant expansion of surveillance technology in London. Human rights groups have said the use of facial recognition systems is intrusive and has the potential to erode privacy in public places. The technology has also been found to be inaccurate in some of its uses.

The Met's rollout of the technology appears to contradict an independent study conducted by academics with access to the Met's systems. In July 2019, two academics from the University of Essex, Daragh Murray and Pete Fussey, published a report that raised serious concerns about the use of facial recognition in London. The researchers attended six trials of the technology, across which its matches were verifiably correct just 19.05 per cent of the time. Or, to put it another way, the system was wrong 81 per cent of the time when it believed it had made a match.
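Those two figures are the same finding read in opposite directions, as a quick calculation shows. The counts below are the ones widely reported from the Essex study (42 alerts, of which eight were verified as correct) and should be read as the researchers' figures, not new analysis.

```python
# Figures as widely reported from the Essex study; an illustrative check.
verified_correct = 8   # alerts confirmed as the right person
total_alerts = 42      # alerts generated across the six observed trials

precision = verified_correct / total_alerts
print(f"correct when alerting: {precision:.2%}")      # 19.05%
print(f"wrong when alerting:   {1 - precision:.2%}")  # 80.95%, the '81 per cent'
```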

The study was the most detailed report on the use of live facial recognition technology to date and involved interviews with Met Police officers and access to their systems. The researchers concluded it was "highly possible" that courts could decide the technology was unlawful and that it was likely to be "inadequate" under human rights laws.

Both academics said it was unclear why people were being added to the police watchlists, with being "violent" later added as a criterion for including certain individuals. They also said people had been included on watchlists incorrectly. In one trial in Romford in 2018, the system flagged a 15-year-old boy who had already been through the criminal justice system.

The research also said it was not clear why police had picked some locations for facial recognition trials, and that there wasn't a simple way to avoid the technology. Tests near the Stratford shopping centre in east London required people to take an 18-minute detour if they didn't want to be scanned by facial recognition cameras. In another case seen by the researchers, people reading information boards about the technology were already in range of the facial recognition cameras.

The Met Police, which had commissioned the study, distanced itself from the findings. At the time of its release a spokesperson said the research had a "negative and unbalanced tone".

London's use of live facial recognition technology sets it apart from other major cities around the world. San Francisco banned the tech in May 2019. Elsewhere in the US, Oakland, Somerville, Brookline and San Diego have also either banned the technology or halted trials of it.

The UK's data protection regulator, the Information Commissioner's Office, has issued a statement on the wider rollout of the technology, saying the Met has taken its advice on board but that it wants further information on how the technology will be used. "This is an important new technology with potentially significant privacy implications for UK citizens," the ICO said.

It also called on the government to introduce codes of practice for how live facial recognition should be used. In Wales, campaigners who sought to stop police use of facial recognition technology are appealing against a court ruling they previously lost; judges rejected the original case "on all grounds".

Privacy campaigners have argued for new rules around how facial recognition data is handled. "It is unjustifiable to treat facial recognition data differently to DNA or fingerprint data," parliament's science and technology committee chair Norman Lamb MP said after completing a review in May 2018.

Matt Burgess is WIRED's deputy digital editor. He tweets from @mattburgess1

This article was originally published by WIRED UK