Dutch court halts programme where algorithms ‘hunt’ for welfare fraudsters

SyRI system not only in breach of domestic privacy laws but also contrary to the European Convention on Human Rights

In a world in which algorithms are increasingly trusted to run many of the automated systems that manage our lives – often simply because they can – it was heartening to see a Dutch court cry halt the other day to the government’s most ambitious programme yet for “detecting” fraud.

Mark Rutte’s coalition government was told by judges in The Hague that its much-heralded SyRI system – short for Systeem Risico Indicatie, or system risk indication – was not only in breach of domestic privacy laws but also contrary to the European Convention on Human Rights.

The court also made public a letter from Philip Alston, international lawyer and UN Special Rapporteur on extreme poverty and human rights, in which he warned that, by its nature, the system was weighted against the less well-off and those from minority backgrounds.

On paper, SyRI must have looked like a great idea: take a lead among EU countries by connecting all government departments to a single system, overseen by the Dutch department of social affairs since 2014.


Algorithms were then created based on profiles of people who had been caught committing social security fraud or housing fraud, for example, and who were blissfully unaware their data was being used. Those algorithms were then “released” into the system to hunt for others with similar profiles – who were allegedly just waiting to do the same.
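How might such a system work in practice? SyRI’s actual risk model was never disclosed – the court criticised precisely that secrecy – but profile-based flagging of this general kind can be caricatured in a few lines of Python. Everything below is a hypothetical sketch: the field names, the “training” cases and the similarity rule are invented for illustration, not taken from SyRI.

```python
# A deliberately crude, hypothetical sketch of profile-based risk flagging.
# SyRI's real model was secret; every field, case and threshold here is invented.
from dataclasses import dataclass

@dataclass
class Profile:
    benefits_claimed: int    # number of benefit schemes drawn on
    address_changes: int     # moves in the past five years
    reported_income: float   # annual income on file, in euros

# "Training" data: profiles of people already caught committing fraud.
known_fraud_cases = [
    Profile(benefits_claimed=3, address_changes=4, reported_income=9_000.0),
    Profile(benefits_claimed=2, address_changes=5, reported_income=11_000.0),
]

def similarity(a: Profile, b: Profile) -> float:
    """Naive closeness score in (0, 1]: 1.0 means an identical profile."""
    return 1.0 / (1.0
                  + abs(a.benefits_claimed - b.benefits_claimed)
                  + abs(a.address_changes - b.address_changes)
                  + abs(a.reported_income - b.reported_income) / 1_000.0)

def risk_score(person: Profile) -> float:
    """Flag anyone who merely resembles a past offender."""
    return max(similarity(person, case) for case in known_fraud_cases)

# A blameless citizen whose circumstances happen to look similar
# (low income, frequent moves) scores highly "without reason".
citizen = Profile(benefits_claimed=3, address_changes=4, reported_income=9_500.0)
print(f"risk score: {risk_score(citizen):.2f}")  # 0.67 – guilty of nothing
```

The point of the caricature is the court’s point: nothing in that score reflects anything the citizen has done. It reflects only whom they happen to resemble.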

On that basis blameless individuals were being treated as suspects “without reason”, the court ruled.

It also criticised the “secrecy” surrounding the construction of the algorithms, which it said made them “difficult to monitor and control”.

Programmers

The thing about algorithms – essentially sets of digital instructions that tell a computerised system how to behave in response to particular information – is that, at this stage of their development at least, they’re only as good as the programmers who create them.

Often what they do most strikingly is expose the biases of their creators: when the dubious assumptions at the heart of their architecture are run over the administrative minutiae of our daily lives – what they take to be our identities – they “detect” largely what they were built to expect.
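To make that concrete, here is an equally hypothetical sketch of how a creator’s prior becomes the system’s “finding”. The postcodes and weights are invented; the mechanism – hard-coding which neighbourhoods count as suspicious – is the kind of weighting Alston warned about.

```python
# Hypothetical illustration of a bias baked in as a feature.
# Nothing here is taken from SyRI; postcodes and weights are invented.

SUSPECT_POSTCODES = {"1103", "3072"}  # the programmer's prior: "problem areas"

def flagged(postcode: str, benefits_claimed: int) -> bool:
    # The neighbourhood term dominates the score, so the system mostly
    # "detects" the people it was built to suspect in the first place.
    score = (2.0 if postcode in SUSPECT_POSTCODES else 0.0) + 0.5 * benefits_claimed
    return score >= 2.0

# Identical circumstances, different address, different outcome:
print(flagged("1103", benefits_claimed=0))  # True  – flagged by postcode alone
print(flagged("1071", benefits_claimed=0))  # False – same facts, affluent area
```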

In the case of SyRI they were starting to reveal an even more ominous picture: the self-validating – not to mention often downright incorrect – assumptions of those who run government as to who in society is trying, or likely to try, to rip off the state and why.

In that sense they are essentially a cheap alternative to proper social services, in which suitably qualified human beings interact with the population, establish which systems work and which don’t, and fix them for the benefit of everyone involved. Sounds like a plan? The problem is that it costs a fortune.

Profiling

The same applies when it comes to crime or terrorism, for instance. Profiling only tells you what you already know. It puts you at ease with your expectations. Meanwhile – as has happened – the jihadists turn out to be the blond-haired guys in American-style SUVs speaking English.

There’s good news and bad news in the SyRI case. The good news is that the courts are playing an invaluable role in protecting society.

The bad news is that those in the Dutch government who decided how the algorithms would work must have known what they were doing – and must have known better. But they went ahead anyway.