The Digital Services Act and the Problem of Preventive Blocking of (Clearly) Illegal Content




Keywords: digital services act, illegal content, liability of intermediary service providers, content blocking


The adoption of the long-awaited Digital Services Act (DSA) is undoubtedly one of the most significant achievements in the implementation of the ambitious EU Digital Strategy. Beyond the high-profile announcements that the new law will help turn the coming years into Europe’s digital decade, the update of the liability framework for digital service providers has also provided an opportunity for broader reflection on the principles of governance in cyberspace. Indeed, the notice-and-takedown model, in place for more than two decades, had become progressively eroded, leading service providers to implement ever more proactive content filtering mechanisms in an effort to reduce their business risk. The aim of this article is to explore the changes introduced by the DSA that affect the regulatory environment for the preventive blocking of unlawful online content. In this respect, relevant conclusions from the jurisprudence of the ECtHR and the CJEU will also be presented, along with reflections on the possibility of, and need for, a more coherent EU strategy on online content filtering. The analysis will focus on filtering mechanisms concerned mainly with what is referred to as clearly illegal content, as combating the dissemination of this type of speech, often qualified under the general heading of “hate speech”, is one of the priority tasks for public authorities in building trust in digital services in the EU.

Author Biography

  • Marcin Rojszczak, Warsaw University of Technology

    Assistant Professor, Faculty of Administration and Social Sciences, Warsaw University of Technology, Warsaw, Poland


How to Cite

The Digital Services Act and the Problem of Preventive Blocking of (Clearly) Illegal Content. (2023). Institutiones Administrationis – Journal of Administrative Sciences, 3(2), 44–59.