Elon Musk’s vision for free speech on X tested by Israel-Hamas war misinformation

Platform’s owner contends with wave of false or misleading content that has alarmed EU officials and advertisers

Misinformation around the Israel-Palestinian conflict sweeping across Elon Musk’s X has prompted fresh scrutiny of the social media platform from European regulators and new concern from global advertisers.

As the crisis took hold, researchers raced to debunk false or misleading information on the platform formerly known as Twitter. The posts, which have racked up millions of views and shares, include graphic imagery taken out of context, doctored photos and even videos of violent fighting that originated from a video game.

In a letter addressed to Mr Musk on Tuesday, EU commissioner Thierry Breton wrote that the European Commission had “indications” that the platform was “being used to disseminate illegal content and disinformation” in the wake of Hamas’s attacks against Israel.

Invoking the EU’s Digital Services Act, Breton warned Mr Musk that the company was required to have “proportionate and effective mitigation measures” in place to tackle disinformation. “We have, from qualified sources, reports about potentially illegal content circulating on your service despite flags from relevant authorities,” he added.

In response, Mr Musk wrote on X: “Our policy is that everything is open source and transparent, an approach that I know the EU supports. Please list the violations you allude to on X, so that that [sic] the public can see them. Merci beaucoup.”

Mr Breton replied: “You are well aware of your users’ – and authorities’ – reports on fake content and glorification of violence. Up to you to demonstrate that you walk the talk.”

Mr Musk, a self-declared “free speech absolutist”, has dramatically overhauled the platform he bought last year, shedding much of its workforce, including trust and safety staff, and loosening its moderation policies.

Misinformation, propaganda and deliberate disinformation campaigns on social media are endemic to conflicts, particularly in their earliest days. There was, for example, a surge in misinformation at the start of Russia’s full-scale invasion of Ukraine in early 2022 across platforms including X, TikTok and Meta.

But experts argue that Mr Musk’s decision to strip back moderation resources, together with certain product changes, has allowed misinformation to proliferate at scale on X in new ways.

“The differences in the platform architecture that Elon Musk has put in place are making it so much harder to assess the credibility of a source,” said Emerson Brooking, senior fellow at the Digital Forensic Research Lab of the Atlantic Council.

In particular, Mr Brooking pointed to Mr Musk’s decision to open up the blue check marks, which once denoted verified celebrities, journalists or experts, to anyone who pays an $8 (€7.54)-a-month subscription. The change has made it easier to masquerade as a media outlet or an objective party, he said, while the algorithm now promotes the content of those paying users over that of others.

Mr Brooking and others also note the consequences of a “creator” programme introduced in July that gives cash to X’s top users through an advertising revenue share.

“It encourages posting as often as possible, and the claims as salacious as possible because the users are trying to maximise impressions on individual posts,” Mr Brooking said. “I think a number of actors saw the audience and attention that surrounded Russia/Ukraine in 2022 and they want a piece of it.”

According to Arieh Kovler, a Jerusalem-based political analyst and independent researcher, some of the misinformation is first generated on channels on messaging app Telegram before being shared elsewhere. Mr Kovler said most of this content was being shared on to X “in good faith”, with users unable to understand the context due to language barriers, for example. But he added: “Even if you’re being an honest broker, you retweet and get lots of likes ... if you click delete, maybe it costs you your $500? Will you?”

Other critics have pointed to Mr Musk’s own activity on the platform after he recommended users follow two accounts that have been shown to peddle misinformation, in since-deleted tweets that garnered millions of views.

“His behaviour on the platform – he sets the tone from the top – he’s saying it’s OK to spread conspiracy theories as he does it himself,” said Kayla Gogarty, research director at left-leaning non-profit Media Matters. “This is the first big test for Musk’s version of X. The platform has failed this test.”

In a post published on X’s safety account on Monday, the platform said it had seen an increase in daily active users in the conflict areas, and 50 million posts related to the Hamas attack. It said it had taken action against “tens of thousands of posts” for sharing graphic media, violent speech and hateful conduct. It said it had also removed several hundred accounts for attempting to manipulate trending topics and was removing newly created Hamas-affiliated accounts.

Linda Yaccarino, X’s chief executive, wrote in an internal memo to staff that the company had “redistributed resources, refocused internal teams and activated more partners externally to address this rapidly evolving situation”. She added that a “cross-company leadership taskforce” had been convened to work on how to address the crisis.

Still, the conflict risks further alienating advertisers from X, many of whom left the platform last year over a lack of reassurance that their adverts would not run alongside toxic or negative content. Due to the advertiser exodus, revenues were down by 60 per cent in the US, Mr Musk said last month, without specifying a time frame.

Several advertising agency executives told the FT that they already did not recommend advertising on the platform, and their position remained unchanged. However, for those who remain, the impact may be chilling.

“The misinformation on X and lack of control is further damaging X’s credibility,” said Sir Martin Sorrell, boss at digital marketing agency S4 Capital, adding that chief marketing officers were increasingly concerned.

“Musk’s unpredictable behaviour and polarising political views have meant that it’s been a platform to avoid for a good while now,” one advertising agency executive said. “That, coupled with the alleged misinformation circulating on it regarding the situation in Israel and Gaza, I would say it’s never been a more damaging place for brands to show up.”

Mr Brooking said he had observed largely financially rather than ideologically driven users seizing on X’s weaknesses. But he warned: “As time passes, more terrorist messengers and war propagandists will see the regime Musk has put in place as an opportunity to exploit.”

Some of the false claims and debunked information circulated on X about the Israel-Hamas conflict

• One user claimed that a video of blazing red fires across a city, viewed at least 1.1 million times, showed “what was happening in Gaza”. In fact, it showed firework celebrations by supporters of an Algerian football team in 2020.

• Verified accounts helped disseminate a document purporting to show that the Biden administration had authorised an $8 billion aid package to Israel. In fact, the document was a doctored version of one released by the White House in July relating to Ukraine.

• One video, shared millions of times, claimed to show a Hamas militant shooting down an Israeli helicopter with a shoulder-fired rocket. In fact, the footage was taken from the video game Arma 3.

Source: Media Matters

– Copyright The Financial Times Limited 2023