Social media giants must be excluded from online safety watchdog role

Advisory role for firms such as Facebook would amount to self-regulation

“Facebook walks and talks like a media company, so it should be treated like one. The cultural change would be significant – and content moderation would change as a consequence.” Photograph: Dominic Lipinski/PA Wire

Facebook is huge, with more than 2.2 billion monthly active users. There are also all those users on WhatsApp and Instagram, both owned by Facebook. One in three human beings uses Facebook every month. In 1966 John Lennon claimed that the Beatles were "more popular than Jesus" – a hugely controversial statement at the time. Christianity has about 2.2 billion adherents. Facebook has achieved what the Beatles did not.

And its users are active. Every minute on Facebook more than 500,000 comments are posted, almost 300,000 statuses are updated and more than 130,000 photos are uploaded. Every minute.

The company is also wealthy – it’s worth more than $600 billion. In the first three months of 2018 it had revenue of $12 billion – almost 50 per cent more than the same period in the year before – for a record net profit of almost $5 billion – up 65 per cent. Most of Facebook’s revenue comes from advertising.

However, over the past several months we have seen Facebook deal with scandal after scandal: massive personal data breaches, the Cambridge Analytica affair and its impact on major international elections.


Disturbing content

Then came Channel 4’s shocking revelations earlier this week. Facebook content moderation was shown to be severely lacking, with examples of extremely disturbing content left on the platform. Of course, Facebook faces major technical challenges in policing its platform. However, we saw that the policies being used to decide what should be taken down were clearly inappropriate.

The problem Facebook has is not that its moderators are independently doing the wrong thing, it is that they are being told the wrong thing to do. Facebook appears to tolerate extremely disturbing content, hiding behind the desire not to be seen as over-censoring activity on the site or compromising free speech. But the examples presented were not about that – they comprised disturbing, gratuitous and unnecessary content.

It might have come as a surprise to many that much of the content moderation is handled manually, and that where automated tools are used they often don't do what we might expect. Building automated tools for these tasks is extremely challenging; they present major problems for artificial intelligence (AI).

Many AI systems for content moderation are built by “training by example” – what we call machine learning in the AI world. The examples used for training are regarded as representing the perfect response the system should make – what we call the “ground truth” in AI.

A challenge that Facebook will face is that, as it tries to develop AI methods for automated content moderation, its ground truth is compromised: what Facebook seems to be doing isn't what it says it wants to do.
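To make "training by example" concrete, here is a minimal, purely illustrative sketch of a supervised text classifier. The posts, labels and tools shown are hypothetical and are not Facebook's actual systems; the point is simply that the model learns whatever its labels – its ground truth – tell it, so if disturbing content is labelled as acceptable, the model will faithfully reproduce that mistake.

```python
# A toy illustration of "training by example" (supervised machine learning).
# The labels below are the "ground truth": the model learns whatever they say.
# All posts, labels and tools here are hypothetical; this is not Facebook's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: post text paired with a moderation decision.
posts = [
    "have a lovely day everyone",
    "check out this cute puppy video",
    "graphic violence against a child",
    "explicit threat to harm a named person",
]
labels = ["keep", "keep", "remove", "remove"]  # the ground truth the model trusts

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model imitates its labels. If policy led moderators to label disturbing
# content as "keep", the trained system would learn to keep similar content too.
print(model.predict(["violent threat against a person"]))
```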

Therefore, we should be concerned about how well the company can solve its problems using automated tools. It needs to get the basics right first: bad content must be marked as bad and removed. If Facebook can’t get its human content moderators to do the job properly, there is little hope for the AI methods it might wish to build to assist or replace them.

However, the real challenges facing Facebook are not technical but cultural. Facebook describes itself as a technology company, not a media company. The differences are significant. As a technology company, Facebook sees itself as the provider of a platform for its users to share content, while the company makes money through advertising.

To do this it uses what it knows about its users to serve up the right advertisements using its core set of algorithms. It doesn't have to concern itself with the nature of the content so much, since it sees itself as merely responsible for the technology. Of course it has a view on content but, as a technology company, this is of secondary concern. A brilliant paper by researchers at MIT, published in Science in March, showed that news spreads more quickly and more widely when it is false. For a technology company, then, there is a conflict of interest between profit-making from advertising and content moderation.

However, if Facebook saw itself as a media company, then the game would change entirely. It would need to take greater responsibility for content, for truth, and accuracy, since it has a publishing role. Facebook walks and talks like a media company, so it should be treated like one. The cultural change would be significant – and content moderation would change as a consequence.

Action plan

Last week the Government published its action plan for online safety. It was an inadequate plan in many ways. There has been much Government flip-flopping on the establishment of a digital safety commissioner. This role is urgently needed. In the action plan the Government spoke of a national advisory committee on online safety, and said Facebook and other stakeholders would be part of that. It would be inappropriate to have Facebook, or similar companies, on such a committee given their commercial interests. It would be tantamount to self-regulation.

A number of pieces of legislation working their way through the Oireachtas should be progressed as a matter of urgency. For example, the Harassment, Harmful Communications and Related Offences Bill 2017 (sponsored by Labour leader Brendan Howlin) and the Digital Safety Commissioner Bill 2017 (sponsored by Donnchadh Ó Laoghaire of Sinn Féin) should be expedited.

Penalties must follow as a consequence of failure to uphold content standards. The European Commission has demonstrated that it has the backbone to hold multinationals to account. For example, the General Data Protection Regulation has been a game-changer for data protection. This week the commission levied a heavy fine against Google. A similar approach is needed in regulating content on social media platforms, with similar levels of penalties. The problem we are dealing with is an international one, so we need both a national and international response.

Finally, it must be acknowledged that social media platforms have also been a huge force for good. They are invaluable in many ways in terms of providing a mechanism for people to stay connected, and to get access to new ideas and (accurate) news and information. But they face enormous challenges. I'm reminded of the title of the 1995 album by the painter Julian Schnabel: "Every silver lining has a cloud."

Barry O’Sullivan is a professor in the Department of Computer Science at UCC and a director of the Insight Centre for Data Analytics. He is also president of the European Artificial Intelligence Association and a member of the European Commission’s high-level expert group on artificial intelligence.