Fake news and filter bubbles: how to tell truth from lies online

A new initiative aims to give people the tools to question what they read on the internet

Edgar Maddison Welch walked into the Comet Ping Pong pizza restaurant in Washington, DC last Sunday afternoon at about 3pm, armed with an assault rifle, a revolver and a folding knife. The 28-year-old fired two or three shots as staff and customers ran for the exits, then spent the next 45 minutes searching in vain for underground vaults or hidden rooms before surrendering to police.

According to the arrest affidavit, Welch “had read online that the Comet restaurant was harbouring child sex slaves and that he wanted to see for himself if they were there”.

Like millions of others, Welch had become fixated on a bizarre fantasy that had spread across the internet, implicating Hillary Clinton in satanic rituals and paedophilia.

The notion that the Democratic presidential candidate was involved in a child abuse conspiracy first emerged on Twitter in late October and quickly moved to other social media platforms. On November 4th, Alex Jones of the far-right site Infowars posted a YouTube video, which has been viewed almost half a million times, suggesting that Clinton was involved in a child sex ring and that her campaign chairman, John Podesta, engaged in satanic rituals.

“When I think about all the children Hillary Clinton has personally murdered and chopped up and raped, I have zero fear standing up against her,” said Jones, who has frequently been praised by Donald Trump.

Accusations

Over the following days, those accusations merged with new ones stemming from WikiLeaks’ release of John Podesta’s emails, which showed Podesta occasionally ate at Comet Ping Pong. On November 7th, the hashtag #pizzagate first appeared on Twitter, where it would be tweeted and retweeted thousands of times.

Fortunately, nobody was hurt last Sunday, but the “Pizzagate” story highlights some of our worst fears about how incendiary lies can now gain traction among millions of people in a polarised, unregulated, largely anonymised media landscape. “Fake news” and “filter bubbles” have been added to the litany of evils already ascribed to the worlds of digital and social media, joining “radicalisation”, “hate speech” and “revenge porn”.

This week the Broadcasting Authority of Ireland (BAI) launched its new Media Literacy Policy, setting out a range of skills to help people navigate new and emerging forms of media, including the skills to “have the critical awareness to query an item of news”. The policy is the first step towards a proposed Media Literacy Network, bringing together educators, regulators and others to address some of the issues.

It’s a daunting task. Traditional media literacy programmes first emerged in the 1970s, and were intended to give young people the knowledge and critical tools to deconstruct the ideological and commercial messages underpinning editorial content and advertising in broadcast or print media.

Safety protocols

In the last 10 years, as the country’s primary and secondary schools became connected to the internet, there was a need to put safety protocols and training in place.

Simon Grehan of Webwise, the internet safety awareness centre co-funded by the Department of Education and Skills and the EU Safer Internet Programme, has been involved in that process since 2003.

“It’s quite unusual in education because it’s a fast-moving field,” he says. “The way children access the internet and the age they start doing it has changed very fast.”

Plans to introduce a short course in Digital Media Literacy as part of the Junior Cycle reforms have been held up by the Department’s dispute with the teachers’ union ASTI. But the BAI’s framework is broader, and includes an aspiration to help all people make informed choices.

Martina Chapman is an expert on European media literacy policy who has been advising the BAI on the new framework. “Ten years ago, it was all about child protection, then it was digital inclusion and giving older people access,” she says. “Last year we had the drive around hate speech and radicalisation. Now it’s fake news. The skills we needed 30 years ago were far more simple. We now need a whole set of new ones.”

She also points out that while education is obviously focused on the protection of children, “we shouldn’t forget about the general public, who can be harder to reach.”

Created and shared

Media is no longer passively consumed – it’s created, shared, liked, commented on, attacked and defended in all sorts of different ways by hundreds of millions of people. And the algorithms used by the most powerful tech companies – Google and Facebook in particular – are brilliantly designed to personalise and tailor these services to each user’s profile.

As a consequence, the early utopian dream of the internet as a democratising space, where ideas could be freely shared and knowledge made available to all, has been supplanted by a much narrower, nastier reality, driven by the inexorable logic of these companies’ profitable exploitation of our own worst instincts.

“People believe technology has opened up access to knowledge, and in many ways it has,” says Chapman. “But, in actual fact, based on their own actions, it can make their world smaller.”

Grehan agrees. “In the past we were looking for sources of bias,” he says. “Now we’re looking for our own bias.”