Computer keyboard button reading Digital Services Act © Cristian Storto/Shutterstock

With the Digital Services Act, the European Union aims to establish new standards on the obligations and responsibilities of social networks and other large platforms in the management of problematic content. The text is the outcome of an ongoing dialogue between the Commission, Parliament, and the Council, with the involvement of civil society.

26/05/2022 - Federico Caruso

In the early hours of Saturday 23 April, a political agreement was reached between the European Parliament and the Council of the European Union on the text of the Digital Services Act (DSA). In a nutshell, this regulation intends to define common standards for managing illegal or harmful content published on digital platforms. The measures are largely aimed at platforms with over 45 million users, that is, large social networks such as Facebook, Twitter, or TikTok, but also search engines such as Google and e-commerce sites such as Amazon.

The law places a number of responsibilities and obligations on these platforms, aimed at protecting users: combating disinformation and hate speech and, more generally, safeguarding digital rights, against the backdrop of an enormous concentration of power in the hands of a few companies that decide what may or may not be considered "harmful" or "dangerous" content. Transgressors face fines of up to 6 percent of their annual turnover.

The importance of the law has been stressed by the European institutions and by various actors committed to the protection of users' rights. As can be expected for a regulation that intervenes on such a complex issue, positive comments have been mixed with others pointing out limits and missed opportunities.

A few weeks after the announcement, and while we await the final text to be published following the technical meetings currently under way, let's consider some of the most relevant measures contained in the law.

Transparency of algorithms

The main changes introduced by the regulation include the obligation for large platforms to regularly assess the possible negative impact of their services on society and to undergo independent audits. Furthermore, transparency must be ensured on the algorithms used to decide what content to show to users, and with what priority. The measure is part of a package of amendments introduced by the European Parliament and aims to ensure freedom of expression and pluralism of information on the one hand, and the fight against disinformation on the other. “The platforms will have to publish the main criteria used by their algorithms”, Patrick Breyer, a German MEP from the Greens group, explained to OBCT. “Researchers will also have access to them and will be able to investigate the effects of the algorithms”.

In principle, this is a positive initiative, but its practical application raises some doubts. As anonymous sources inside Twitter told Wired, "the first problem is that there is no single algorithm that guides Twitter's decisions, unlike what certain statements by Elon Musk imply". The decisions, the sources explain, are the result of the interaction between different algorithms that perform "a complex dance on top of a mountain of data and a multitude of human actions". Furthermore, the algorithms used by Twitter (but the principle applies to all large platforms) are based on machine learning systems that make decisions according to constantly evolving models. It is therefore not possible to "investigate the algorithm" simply by publishing the source code online: one would instead have to recreate a realistic simulation of what happens on the platform. And even that may not give reliable results, because in the meantime the models of the actual platform will have changed in response to the enormous flow of incoming data.
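
To make the point concrete, here is a minimal, entirely hypothetical Python sketch (the models, features, and weights are invented for illustration) of content ranking as the interaction of several learned models. Publishing this source code would reveal little about what users actually see, because the behaviour lives in the constantly retrained weights rather than in the code itself.

    # Hypothetical sketch: feed ranking as the interaction of several
    # machine-learning models, not a single "algorithm". The code is
    # static, but the learned weights are retrained continuously, so
    # publishing this file alone would not show what users really see.

    class LearnedModel:
        """Stand-in for an ML model whose weights evolve in production."""
        def __init__(self, weights):
            self.weights = weights  # updated constantly from new data

        def predict(self, post):
            # Score a post from its numeric features.
            return sum(self.weights.get(feature, 0.0) * value
                       for feature, value in post.items())

    relevance = LearnedModel({"topic_match": 0.6, "freshness": 0.2})
    engagement = LearnedModel({"likes_per_view": 0.9, "reply_rate": 0.4})
    integrity = LearnedModel({"spam_score": -1.5, "toxicity": -0.8})

    def rank(posts):
        # The final ordering emerges from all three models together.
        def total_score(post):
            return (relevance.predict(post)
                    + engagement.predict(post)
                    + integrity.predict(post))
        return sorted(posts, key=total_score, reverse=True)

    example_posts = [
        {"topic_match": 1.0, "freshness": 0.8, "likes_per_view": 0.1},
        {"likes_per_view": 0.9, "reply_rate": 0.6, "toxicity": 0.7},
    ]
    print(rank(example_posts))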

Content moderation

On the subject of transparency and combating disinformation, the DSA also provides for improvements in content moderation, introduced at the proposal of the European Parliament. Large platforms will have to directly inform users of any content removal, shadow banning (the practice of hiding a user's content without deleting it), or account blocking, and offer channels to contest these decisions. Furthermore, users will be able to report content they deem illegal, and platforms will be obliged to follow up on reports in a transparent manner.

Among the objectives of the regulation is the limitation of the discretion that platforms have today in removing or hiding content that they deem in some way harmful. According to Breyer, "legal content should not be removed as allegedly harmful, to protect freedom of expression". The German MEP goes further, arguing that "it is not up to platforms to monitor their services in search of potentially illegal content. The problem is not the accessibility of that content but the use of algorithms that, to generate profit, spread problematic content to users who do not want to see it. I hope that the DSA pushes platforms to hire more staff, better trained and paid, rather than increasing the use of error-prone censoring algorithms".

Recommender systems, targeted advertising, dark patterns

The EP's package of amendments also touches on the algorithms that establish the criteria with which the platforms select and give relevance to the content shown to the user: in practice, what we see (or do not see) when we scroll through the newsfeed of our social profiles or browse YouTube with our account. The goal was, once again, to ensure greater transparency, but also to give users the freedom to choose which criteria their "social" experience is based on. This is an important issue because the content that generates the most engagement tends to polarise debate and is often linked to extreme political positions, disinformation, and hate speech; users should know that what they see is (also) selected on the basis of these dynamics, and that they can opt out of them.

The DSA will introduce the obligation for platforms to explain the criteria by which this selection takes place, and to offer an option to have content selected according to criteria not based on user profiling, for example in purely chronological order. While this is certainly positive, according to some the law could have been bolder. Some, like Breyer, expected a step towards the interoperability of these services, i.e. granting users "the right to use external classification algorithms" (that is, algorithms developed by organisations external to the platforms), based on transparent criteria and under the user's control, but this was not the case.
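
As a purely illustrative sketch (all names and signals are invented), the choice the DSA imposes could boil down to letting the user switch between two sort functions:

    # Illustrative sketch of the user-selectable feed ordering required
    # by the DSA: a profiling-based ranking versus a non-profiled,
    # purely chronological one. All names and signals are hypothetical.

    def chronological_feed(posts):
        # Non-profiled criterion: newest first, identical for every user.
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

    def personalised_feed(posts, interests):
        # Profiling-based criterion: ordering depends on the user's data.
        def score(post):
            return sum(interests.get(tag, 0.0) for tag in post["tags"])
        return sorted(posts, key=score, reverse=True)

    def build_feed(posts, interests, profiling_enabled):
        if not profiling_enabled:  # the opt-out the DSA obliges platforms to offer
            return chronological_feed(posts)
        return personalised_feed(posts, interests)

    posts = [
        {"id": 1, "timestamp": 100, "tags": ["politics"]},
        {"id": 2, "timestamp": 200, "tags": ["sport"]},
    ]
    print(build_feed(posts, {"politics": 1.0}, profiling_enabled=False))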

Thanks to the contribution of the EP, important progress has been made towards a definitive ban on targeted advertising and so-called dark patterns. On the first topic, the new law establishes that platforms will no longer be able to show targeted advertising to minors, and that in any case such advertising cannot be based on the categories of sensitive data defined by the GDPR (health status, religion, political opinions, sexual orientation, etc.). The EP foresees room for further action on data tracking and the related advertising surveillance. "There will be an opportunity to address the issue of 'surveillance advertising' in the ePrivacy regulation", explained Breyer. "The European Parliament is also pressing for the introduction of a mechanism that prevents user tracking (do not track) and the right for those who use it to access their browsing data. The dossier on political advertising [another legislative proposal under discussion that will complement the DSA, ed.] is also an opportunity to deal with surveillance ads for political purposes".

Progress on dark patterns, i.e. those design tricks that push the user towards a certain choice (for example, signing a contract or sharing personal data), is more limited: the text approved by the European Parliament specifies that they will only be banned on the platforms covered by the DSA, and not in general.
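
To give a concrete (and entirely hypothetical) illustration in code of the asymmetry these design tricks rely on: accepting tracking is a single, pre-selected step, while refusing is fragmented into many.

    # Hypothetical sketch of a classic dark pattern: asymmetric effort.
    # The privacy-invasive choice is pre-selected and takes one click,
    # while refusing requires a separate action for every vendor.
    # All names are invented for illustration.

    VENDORS = ["ad_network_a", "ad_network_b", "analytics_c"]

    def accept_all():
        # One click: tracking enabled for every vendor at once.
        return {vendor: True for vendor in VENDORS}

    def refuse_tracking():
        # Refusal is only offered vendor by vendor, one screen each,
        # multiplying the effort needed to make the protective choice.
        consent = {}
        for vendor in VENDORS:
            consent[vendor] = False  # one separate user action per vendor
        return consent

    # Defaults matter too: a pre-ticked box is itself a dark pattern.
    default_consent = accept_all()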

From theory to practice

The focus is now on the applicability of the regulation. To avoid a repeat of what happened with the GDPR, where a generally well-regarded text ran up against the reluctance of some national authorities to fine large companies headquartered in their country (Ireland and Luxembourg being the most striking cases), the European Commission has taken on the enforcement of the DSA itself, in collaboration with the authorities of the various member states.

First of all, there is the question of human resources. As Politico explains, the Commission has announced the hiring of 150 people, including lawyers and algorithm experts, and one wonders whether they will be enough. By comparison, the UK aims to hire 500 people to be responsible for ensuring the implementation of the Online Safety Bill, a law similar to the DSA.

The big platforms will pay for this directly: they will be asked to contribute up to 0.05 percent of their annual earnings, with the Commission aiming to raise around 30 million euros per year. According to anonymous sources consulted by Politico, this measure was necessary because the enforcement of the DSA was not included in the negotiations for the EU's 2021-2027 budget.

The final text of the law is expected to be published in the next few days, after which it will be formally approved by the Parliament and the Council of the European Union. The regulation will become applicable 15 months after its entry into force, and in any case not before January 1, 2024. From that moment, the large platforms will have an additional four months to adapt to the new rules.

The action is co-financed by the European Union in the frame of the European Parliament (EP)'s grant programme in the field of communication. The EP is, in no case, responsible for or bound by the information or opinions expressed in the context of this action. The contents are the sole responsibility of OBC Transeuropa and can in no way be taken to reflect the views of the European Union. Go to the project’s page: “The Parliament of rights 3”.

