
Digital Services Act: The most significant legislation you’ve never heard of

Many of its operational cogs and wheels run behind the scenes, meaning you probably didn’t notice anything

The European Union’s Digital Services Act (DSA), one of the most significant pieces of legislation yet in the struggle to rein in technology giants and limit the societal problems they create, came into force last Friday.

But you probably didn’t notice. Many of its operational cogs and wheels run behind the scenes, in new obligations and liabilities placed upon technology companies. Companies have also made it look as if there’s nothing new to see and continue to nudge users towards choices that benefit the platforms, not people.

At best, users might have received a pop-up notification asking whether they wanted to see content in chronological order or, as the companies phrase it, “personalised” and “tailored” to their supposed interests.

YouTube served me a little pop-up along those lines. I clicked to choose a chronological presentation, mindful of how the alternative – that cheerfully presented tailored experience – has been the source of many of social media’s most pervasive problems. Watch one video on a subject, view one post and like it, click on one ad, follow a hashtag, comment on an image or a post, link to anything and the algorithm inevitably serves you further related content.


If the video, post or image relates to an eating disorder, disinformation or self-harm, or promotes violence or a conspiracy theory, then you’ll likely get more of the same, whatever the platform. You’ve indicated your interests and preferences, as far as the platform algorithms are concerned.

Because platforms disclose little about their algorithms, fixing this is a serious challenge. As artificial intelligence is increasingly thrown into the mix of determining your interests and preferences, the problem is likely to get worse, because AI acts in ways no one expects or understands. That’s one of the most important truths about AI: nobody fully understands how it works.

This process of recording your interests and preferences is called profiling, which enables technology platforms and advertising giants to compile revealing information portraits about users. Profiles form the basis of complex digital advertising industry activities whereby you can be endlessly subcategorised, and access to you sold via targeted advertising. Profiling helps companies to keep you scrolling, too, luring you to offer up even more data, an endless circle of profit-driven “engagement”.

Anyway, back to the DSA. With this piece of legislation, new constraints are imposed on how companies gather user data and what they can do with it. Companies have to reveal more about how their algorithms work. And they have to give users more choice and control.

All of this is promising, in theory. Most people, however, don’t understand how opting for “personalisation” feeds into invasive data-gathering, nor is it made clear that personalisation benefits companies more than users. Yes, you might see a bit more of what interests you but the cost – you, analysed, defined, packed into revealing categories and sold as a data set or a target audience – is high.

Yet the DSA requirement that companies let users choose whether they want “personalisation” or not is being presented by platforms as an option without significant, privacy-impacting consequences. On most platforms, it’s already a struggle to find the many controls users need to better protect their own information. It’s tedious and time-consuming to change away from the default of “platform sucks up just about everything and shares just about everything”.

LinkedIn and Facebook are cases in point. Both gather enormous amounts of user data and detail and want to share it with other users and third parties. Limiting this involves finding, then changing, dozens of settings. Post-DSA, platforms continue this “baffle ‘em” approach and present the “personalisation” option – letting the platform at your data – as the more attractive choice.

Consider Facebook, where a small DSA pop-up told me it was now “sharing more details about our policies and how we apply them”, with an option to dismiss the pop-up or “learn more”. I was the one in a million who clicked “learn more” and got a lengthier pop-up stating Facebook will apply new “terms” on October 20th. “By continuing to use our technologies after that date, you accept the updates to the terms,” it stated.

I had to laugh. This circuitous, dismissible, detail-free ramble is hardly the “informed consent” required by the General Data Protection Regulation. How can it satisfy the DSA?

This questionable compliance by pop-up across several platforms indicates the DSA has some significant user-end weaknesses that must be addressed. Allowing platforms to continue with such serpentine evasiveness places an unfair burden on platform users who cannot clearly see the personalisation versus privacy trade-off and are beaten into bored submission by unintuitive, obscured user controls and unclicked links to privacy policy legalese.

The DSA is a welcome start, but more must be done to make platforms directly accountable to users, not just regulators, and to offer meaningful control through privacy-protecting default settings (think Apple’s user-centric approach to third-party data access on iPhone), rather than unwanted expeditions into user settings and policy labyrinths.