The Digital Services Act and the Problem of Preventive Blocking of (Clearly) Illegal Content

DOI:

https://doi.org/10.54201/iajas.v3i2.85

Keywords:

digital services act, illegal content, liability of intermediary service providers, content blocking

Abstract

The adoption of the long-awaited Digital Services Act (DSA) is undoubtedly one of the more significant successes in the implementation of the ambitious EU Digital Strategy. Alongside important announcements that the new law will help to turn the coming years into Europe’s digital decade, the update of the liability framework for digital service providers also provided an opportunity for broader reflection on the principles of governance in cyberspace. Indeed, the notice-and-takedown model, in place for more than two decades, had become progressively eroded, leading service providers to increasingly implement proactive content filtering mechanisms in an effort to reduce their business risk. The aim of this article is to explore those changes introduced by the DSA which affect the regulatory environment for the preventive blocking of unlawful online content. In this respect, relevant conclusions from the jurisprudence of the ECtHR and the CJEU will also be presented, along with reflections on the possibility of, and need for, a more coherent EU strategy on online content filtering. The analysis will focus mainly on filtering mechanisms concerning what is referred to as clearly illegal content, as the fight against the dissemination of this type of speech, often qualified under the general heading of “hate speech”, is one of the priority tasks for public authorities in building trust in digital services in the EU.

Author Biography

  • Marcin Rojszczak, Warsaw University of Technology

    Assistant Professor, Faculty of Administration and Social Sciences, Warsaw University of Technology, Warsaw, Poland

Published

2023-12-15

How to Cite

The Digital Services Act and the Problem of Preventive Blocking of (Clearly) Illegal Content. (2023). Institutiones Administrationis - Journal of Administrative Sciences, 3(2), 44-59. https://doi.org/10.54201/iajas.v3i2.85