
Battle over facial recognition as EU makes groundbreaking AI law

EU is first mover in regulating artificial intelligence as it seeks to balance risks with economic potential

A child is missing. Every moment counts in the search to find them. Should a system of cameras in public places be permitted to scan the faces of people as they pass by, using artificial intelligence to try to track down the missing child?

This was the question that divided the European Parliament on Wednesday as it voted on a landmark law to regulate artificial intelligence, which will be the first of its kind and should set a powerful precedent for regulations worldwide.

Privacy campaigners have argued that the law should ban any real-time biometric identification systems, such as facial scanning, in public spaces.

However, in Wednesday’s voting session, Fine Gael’s centre-right European People’s Party tried to introduce an exception to any such ban to allow for the targeted search for missing people, including children.


The group also pushed for exceptions to allow law enforcement to try to prevent terrorist attacks, and to identify the perpetrators of crimes that carry a sentence of at least three years.

When the question was put to a vote, 300 MEPs voted against allowing such technology to be used in searches for missing children, with 268 in favour. The EPP amendments failed.

The issue of facial recognition will doubtless surface again as the parliament now thrashes out the final version of the law with the European Commission and member states.

The voting session established the parliament’s cross-party position on the law, opening the way for the final stretch of negotiations. The AI Act could become law by the end of the year and come into force two years later.

Though the European Union is one of the earliest movers in seeking to regulate the technology, the applications of AI are becoming a reality faster than lawmakers can keep up.

In March the French parliament backed a law that would introduce so-called intelligent surveillance systems for the Paris Olympics in 2024.

Supporters say the system would flag abandoned packages and potential crowd crushes, and that it won’t scan faces – though it would detect other physical traits, such as gait and gestures.

Israeli authorities have put in place facial recognition to surveil Palestinians in occupied East Jerusalem, says Amnesty International, which also says products made in the EU have been used in China for the surveillance of Uighurs and other minorities.

In advance of Wednesday’s vote, the campaign group called for the parliament to ban the export of such technologies and prohibit “discriminatory profiling systems that target migrants” at the EU’s borders.

Among EU lawmakers, there is now palpable worry about how AI could affect the European elections due to take place in one year’s time.

Some AI tools allow users to create so-called “deepfake” content. They can mimic someone’s voice and appearance, for example to show a politician saying something they never said.

Turkey’s election in May was the first big vote in which advanced falsified videos wreaked havoc.

One presidential candidate dropped out, saying his face had been spliced into a porn video to create a fake sex tape to discredit him. The ultimate winner, president Recep Tayyip Erdogan, showed supporters an edited video that made it appear that a Kurdish militant leader had endorsed his main rival, associating the opposition with terrorism.

The proposed EU law seeks to strike a balance between limiting the potential harms of AI and allowing for innovation and development, in a sector with far-reaching implications for the economy.

It imposes an escalating level of restrictions and oversight on AI products according to their level of risk.

It bans “unacceptable” risk systems, such as those designed to manipulate human behaviour.

Programmes that might have serious consequences for an individual, such as a CV-filtering programme or a system to screen applications for public assistance benefits, must follow safeguards.

The people affected must have a way to challenge decisions, there must be human oversight, and AI systems must be open to being “audited” by regulators. Content created by AI should also be clearly labelled as such.

Deirdre Clune, MEP for Ireland South, played a frontline role in shaping the law as the lead negotiator on behalf of the EPP.

In the debate before the vote, she told lawmakers that AI technologies had the potential “to solve the most pressing issues including climate change or serious illness”.

“This law could become the de facto global approach to regulating AI,” she said. “We should be leaders in ensuring that this technology is developed and used in a responsible ethical manner, while also supporting innovation and economic growth.”