Female politicians and journalists abused every 30 seconds on Twitter

Amnesty International trawled through 300,000 tweets mentioning one of 778 women

Female politicians and journalists were abused on Twitter every 30 seconds in 2017, according to the largest-ever study into how women are targeted with hate speech online.

Researchers from Amnesty International and Element AI, an AI software start-up, recruited volunteers to read through 300,000 tweets from 2017 that mentioned one of 778 women on their list and to label any abuse targeting gender, race or sexuality.

The findings, when extrapolated, suggest that 1.1 million abusive tweets were sent to the women on their list, which included all female members of parliament in the UK, female members of Congress in the US, and a number of journalists working at titles across the political spectrum, from Pink News and the Guardian to the Daily Mail and Breitbart.

Women of colour were more likely to be mentioned in abusive tweets. Black women were almost twice as likely as their white counterparts to be targeted. Diane Abbott, the Labour shadow home secretary, received the most abuse of any British woman, with estimates putting the total number of abusive tweets mentioning her at 30,000 last year.

"Abuse of this kind really limits women's freedom of expression online. It makes them withdraw, limit conversations and even remove themselves altogether," said Milena Marin, senior adviser for Tactical Research at Amnesty International, who has been studying the issue for three years.

“We’ve been in dialogue with Twitter for a long time, asking for transparency around the abuse data, but they act as gatekeepers. I don’t think that it’s Amnesty’s job to analyse abuse on Twitter, but we had no choice.”

She added that the data set compiled by the researchers is now the largest of its kind in the world. “We have the data to back up what women have long been telling us – that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked,” said Ms Marin.

Ms Abbott said her staff spends “a considerable amount of time” removing abusive tweets and blocking users and that the abuse she suffers is overwhelmingly racist and misogynist.

“I have always felt that this type of hate speech can lead to violence, and Twitter has a responsibility to shut these accounts down a lot quicker than it currently does,” she said. “The sheer volume of abusive comments makes it difficult to block them all ... Twitter does not do enough to identify accounts that repeatedly offend.”

Volunteers

The researchers relied on 6,000 volunteers worldwide, who were shown anonymised tweets mentioning one of the women on the list and asked to categorise whether each tweet was abusive and, if so, in what way.

Each tweet was analysed by multiple people, and before they began the volunteers were given a tutorial, with definitions and examples of abusive and problematic content. Element AI then extrapolated from the labelled sample to estimate the scale of abuse the women faced, and developed an application that attempts to detect whether a tweet is abusive.
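For readers curious about the mechanics, the sketch below shows one way such crowd labels could be aggregated and scaled up: a majority vote collapses each tweet’s multiple judgments into a single label, and the abuse rate observed in the sample is extrapolated to the full population of mentions. The label categories, judgment counts and population figure are illustrative assumptions, not details published by the researchers.

```python
from collections import Counter

def aggregate_labels(judgments):
    """Collapse several volunteers' judgments on one tweet into a single
    label by majority vote ('abusive', 'problematic' and 'neither' are
    illustrative categories, not the study's exact taxonomy)."""
    return Counter(judgments).most_common(1)[0][0]

def extrapolate_total(abusive_in_sample, sample_size, population_size):
    """Scale the abuse rate observed in the labelled sample up to an
    estimate for the full population of tweets."""
    return round(abusive_in_sample / sample_size * population_size)

# Each inner list holds one tweet's judgments from multiple volunteers.
judgments = [
    ["abusive", "abusive", "neither"],
    ["neither", "neither", "problematic"],
]
labels = [aggregate_labels(j) for j in judgments]
abusive = sum(1 for label in labels if label == "abusive")

# Hypothetical population size, purely for illustration.
print(extrapolate_total(abusive, len(labels), 1_000_000))
```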

"What we found is that we have algorithms that are comparable to the crowd, but not sufficient, because they did not get it right 100 per cent of the time," said Julien Cornebise, director of research, AI For Good and head of the London office of Element AI.

“Human judgment plays a key part; algorithms can empower human moderators, but it’s a bit like a triage system. It isn’t perfect and cannot entirely replace humans.”
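A triage of that kind might automatically act only on the model’s most confident calls and pass everything ambiguous to people. The sketch below assumes a classifier that returns an abuse probability; the thresholds and routing labels are invented for illustration and are not drawn from Element AI’s system.

```python
def triage(abuse_probability, high=0.95, low=0.05):
    """Route one tweet based on a classifier's abuse probability.
    Only the confident extremes are handled automatically; everything
    in between goes to a human moderator. Thresholds are illustrative."""
    if abuse_probability >= high:
        return "auto-flag"    # near-certain abuse: fast-track for action
    if abuse_probability <= low:
        return "auto-clear"   # near-certain benign: no review needed
    return "human-review"     # uncertain: a person makes the call

for p in (0.99, 0.50, 0.01):
    print(f"{p:.2f} -> {triage(p)}")
```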

Twitter, like its social media peers, has struggled with the torrent of abuse that users, both male and female, receive in the form of hate speech, fake news and nasty conspiracy theories.

Recently, Jack Dorsey, Twitter’s chief executive, launched a search for a new way to measure the “health” of online conversations, as the company tries to reduce abuse, trolls and bots on its platform while trying to maintain free speech.

Twitter is investing heavily in machine learning to develop automated tools that can adjudicate abuse reports, but Ms Marin of Amnesty says the company has been guarded about how its algorithms are trained, how many human moderators it employs, and whether abuse reports are ultimately handled by humans or machines.

"With regard to automation in content removal, we wholly agree that there are currently risks to free expression by the widespread use of automation in content removal," said Vijaya Gadde, global head of legal, policy and trust and safety at Twitter. "Abuse, malicious automation, and manipulation detract from the health of Twitter. We are committed to holding ourselves publicly accountable towards progress in this regard." – Copyright The Financial Times Limited 2018