Using AI in election administration could harm democratic process, study finds

Queen’s University Belfast study warns that using AI technologies such as video monitoring to tackle fraud may impair integrity of elections if widely used

Using artificial intelligence (AI) in the administration of elections could potentially “harm our democratic process” and adversely affect minority groups, new research has found.

The Queen’s University Belfast (QUB) study warns that using certain AI technologies, such as video monitoring to tackle fraud, could impair the integrity of elections if they become widely used.

The paper raises concerns about the use of facial-recognition technology at polling stations, with research showing it to be less accurate for people of colour, women and younger people.

This potential for “disenfranchising minority groups” is of significant concern, according to the paper.


In what is understood to be one of the first studies of its kind, the Queen’s team calls for a “public conversation” around the use of AI in core electoral processes, including the administration of mailing lists and voter identification.

Risks linked to using AI to determine where polling stations should be located are also highlighted, amid concerns the technology could favour densely populated urban areas, potentially leading to an “accentuated exclusion of rural voters or those who can’t independently travel to the polling station”.

The researchers acknowledge the benefits of the technology and the “significant enthusiasm across the globe in respect of using AI for all forms of social activity”.

AI expert and computer scientist Dr Deepak Padmanabhan, from the school of electronics, electrical engineering and computer science at QUB, is among the study leads. He said it was likely that AI would become “pervasive” in election administration in the near future.

“So we’re raising a flag in order to prompt and inform a public debate. We’re not saying it’s necessarily all bad, but our research uncovered several significant concerns. What we are saying is that using it will fundamentally change the nature of elections and voters need to be aware of that and look more closely at the potential for harm to our democratic processes,” he said.

“The usage of AI within the private sector is often driven by the promise of efficiency, given that efficiency is often treated as the primary criterion in the market-based society that we find ourselves in.

“The use of any technology within the public sector, especially in critical democratic processes such as elections, however, needs to be considered against other criteria. For example, public trust in technology and acceptance of the legitimacy of AI-based outcomes are essential, particularly amongst vulnerable stakeholders such as ethnic minorities.”

Dr Padmanabhan urged caution about “reckless and invisible” AI usage in relation to “back-end processes such as voter list cleansing”.

Prof Muiris MacCarthaigh from QUB’s school of history, anthropology, philosophy and politics said there had already been extensive debate about fake news, ‘deepfake’ images and misinformation to influence election campaigns and manipulate voters.

“But there hasn’t been much focus on the core administrative elements of the election process – in fact, we believe our research to be among the first, if not the first, in this area,” he added.

“We don’t think AI is widespread yet in core electoral processes, although it is being used in some jurisdictions, particularly in the US and parts of Asia. The literature on this is very limited, which is partly what motivated us to want to dig deeper.”

The research is published in AI Magazine.

Seanín Graham

Seanín Graham is Northern Correspondent of The Irish Times