Why we should be alarmed at Garda plans for facial recognition technology

Several US cities banned the technology in policing after analysis indicated, at best, a 72% accuracy rate, with a higher rate of false positives for people of colour


Facial recognition technology is back on the policing agenda and we should not be complacent about what its use means: an alarming normalisation of mass visual surveillance.

Despite recently touted increases in the accuracy of identification – a double-edged sword if ever there was one, because precise identification of anyone and everyone means a sweeping loss of a basic right to everyday privacy – facial recognition remains an imperfect process. It may identify you precisely, or misidentify you, with consequences ranging from exasperating inconvenience to life-threatening danger.

Several US cities banned the use of the technology for policing when previous analysis indicated, at best, a 72 per cent accuracy rate, with a higher rate of false positives for people of colour. A new study by the UK’s National Physical Laboratory (NPL) claims 89 per cent accuracy and says that, at some settings, discrepancies for race and gender were statistically insignificant.

At the default setting, one in 6,000 identifications would be a false positive. At the scale of real populations, that would still have a serious, unwanted impact.


The push is for such systems to be deployed live, surveilling busy city streets or areas of mass gatherings. In Ireland, let’s say such a system is activated, as might be expected, at Dublin Airport (which already has facial recognition at some gates – identification surveillance, but not active Garda surveillance). Some 28.1 million passengers passed through Dublin Airport in 2022. At the default setting, more than 4,600 people a year could be false positives. That’s nearly 400 a month.
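A rough back-of-the-envelope check of those figures – a minimal sketch that assumes, simplistically, one facial-recognition scan per passenger, an assumption not drawn from the NPL study itself:

```python
# Back-of-the-envelope estimate of false positives at Dublin Airport.
# Assumption (illustrative, not from the NPL study): each passenger
# is scanned exactly once per journey.
passengers_per_year = 28_100_000   # Dublin Airport throughput, 2022
false_positive_rate = 1 / 6_000    # NPL study, default setting

false_positives_per_year = passengers_per_year * false_positive_rate
print(f"Per year:  {false_positives_per_year:,.0f}")       # ~4,683
print(f"Per month: {false_positives_per_year / 12:,.0f}")  # ~390
```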

Evidence still points to such systems having poorer accuracy with minors and anyone who isn’t a white man. But even if you accept that accuracy has improved, that isn’t necessarily a selling point. As a recent experts’ open letter to The Irish Times on facial recognition technologies points out: “[E]ven if accuracy were to improve, because the technology can be deployed indiscriminately, it risks increasing the problem of over-policing in areas with marginalised groups, leading to disproportionate incrimination, racial and minority ethnic profiling and derailing of people’s lives.”

Yet governments and police forces in Ireland, Britain and the US are all pushing to introduce (or reintroduce) this tool, despite significant pushback from civil and digital rights campaigners, including the Irish Council for Civil Liberties (ICCL), the American Civil Liberties Union (ACLU) and British groups Liberty, Big Brother Watch and Amnesty.

At the weekend, Tánaiste Micheál Martin said he supported the idea of fast-tracking an amendment to legislation – the Garda Síochána (Recording Devices) Bill currently going through the Dáil – to allow the use of facial recognition technologies here, noting it would only be used in “very selected specific circumstances”, such as child abuse or murder cases.

But that’s exactly the easily ignored “serious crime” argument that was used to defend the introduction years ago of whole-population communications surveillance by – why yes, now that you mention it – a fast-tracked amendment to existing legislation.

The “serious crime” part of that rushed communications data surveillance amendment – which we were told would be used only for alleged offences in areas such as terrorism or child abuse – turned out on implementation to be subject to restrictions and oversight so comically flimsy that, as the data protection commissioner of the time put it, gardaí could seize a person’s communications data if they were cycling through the Phoenix Park without a bike light.


Even once “proper” stand-alone legislation on communications data surveillance was brought in, 62,000 requests for communications data were made in a five-year period, most of them by the Garda. For a small State, that is an extraordinary number of requests, suggesting a nation awash in nonstop “serious crime”.

What a surprise, then, when Irish communications data surveillance legislation ended up at the centre of cases before the Court of Justice of the EU (CJEU), one being the landmark Digital Rights Ireland case, in which the court threw out the entire EU data retention directive following a challenge that originated in Ireland.

More recently, the court ruled that Ireland’s failure to introduce fresh, compliant legislation to replace the flawed data retention practices it had effectively thrown out in 2014 meant the Garda lacked lawful grounds for seizing and using communications records. We have all witnessed how the lack of a sound legislative base risked overturning convictions in actual serious crime.

The EU is soon to introduce a regulatory framework for the use of artificial intelligence – the AI Act – which will apply to facial recognition technologies. As the open letter notes, what is the point of rushing through an amendment likely to be swiftly superseded by the incoming EU framework?

But such technologies raise much larger questions about whether they should be used at all, especially at population-scanning scale. Because live surveillance isn’t visible to the public, it is harder to grasp how much more comprehensive and intrusive it is than anything from the pre-digital era.

The Stasi could only have dreamed of such always-on, pervasive mass surveillance. Is this really the direction we want to go?