Twitter faces legal challenge after failing to remove reported hate tweets

HateAid in Germany alerted the social media giant to anti-Semitic and racist tweets, which were not taken down

Twitter faces a landmark legal challenge after failing to remove a series of hate-filled tweets reported by users, in what could prove a turning point in establishing new standards of scrutiny for online anti-Semitism.

The California-based company, owned since last year by Elon Musk, was alerted in January this year to six anti-Semitic or otherwise racist tweets by researchers at HateAid, a German organisation that campaigns for human rights in the digital space, and the European Union of Jewish Students (EUJS). It did not remove them from its platform, despite the tweets apparently clearly contravening its own moderation policy.

Four of the tweets denied the Holocaust in explicit terms, one said “blacks should be gassed and sent with space x to Mars”, while a sixth compared Covid vaccination programmes to mass extermination in Nazi death camps.

All were reported in January, but Twitter ruled that three of the tweets did not violate its guidelines and failed to respond to the other reports, the legal action claims.

HateAid and the EUJS applied earlier this year to a Berlin court to have the tweets deleted, arguing that they broke German law and that Twitter had failed to meet its contractual obligation to provide a secure and safe environment for its users.

Twitter has received notice of the legal action and has since acted to block some of the offending tweets.

Avital Grinberg of the EUJS, who reported some of the tweets, said the decision to take legal action had been taken out of “despair, disappointment and anger”.

“All our efforts and advocacy have led nowhere and Twitter has become a space where anti-Semitism and Holocaust denial is just growing and growing. This is so much bigger than us, so we needed the biggest and strongest tool that democracy has to offer and that is the law,” she said.

Experts in extremist violence said the evidence that online hate encouraged physical attacks on targeted minorities was undeniable.

Repeated studies have detected a huge surge in anti-Semitic online content since the start of the Covid-19 pandemic, while in 2022 the Anti-Defamation League in the US tracked the highest number of anti-Semitic incidents since it began recording them in 1979.

Twitter has faced repeated accusations in recent years of failing to act against online hate, but these have intensified since Mr Musk took over in October.

The billionaire, who has described himself as a “free speech absolutist”, restored the accounts of thousands of users who had been banned from the platform, including white supremacists with a history of involvement in neo-Nazi propaganda.

At the same time, Mr Musk dissolved Twitter’s independent Trust and Safety Council responsible for advising on tackling harmful activity on the platform and dramatically cut staff, reportedly including those working on content moderation. Others have resigned.

Mr Musk himself caused controversy in May by describing Jewish philanthropist George Soros, a frequent target of abuse online, as wanting “to erode the very fabric of civilisation”, a longstanding anti-Semitic trope.

Repeated efforts by the Guardian to contact Twitter were unsuccessful. An email containing a detailed list of the alleged failures, sent to Twitter’s press relations office, was met only with an automated reply containing a poo emoji.

Josephine Ballon, the head of legal at HateAid, said the aim of the legal action was to force Twitter to take more responsibility for content on the site.

“Freedom of expression does not just mean the absence of censorship but ensuring that Twitter is a safe space for users who can be free of fear of being attacked or receiving death threats or Holocaust denial. If you are a Jewish person on Twitter then the sad reality is that it is neither secure nor safe for you,” Ms Ballon said.

“We are not demanding anything unreasonable ... Just that their moderation is good enough to take down this very dangerous content. This would signal to their users that Twitter care about keeping them safe.” – Guardian