
Platform Liability, Hate Speech, and the Fundamental Right to Free Speech

Credits: The Future of Free Speech

 

By Natalie Alkiviadou

 

Introduction

The rise of social media has fundamentally altered the landscape of information dissemination, bypassing traditional editorial and governmental controls. This has allowed for rapid global information sharing but has also raised concerns about the influence of social media platforms, even in democratic societies. Legislative responses, such as Germany's Network Enforcement Act (NetzDG) of 2017, mandated the swift removal of illegal content such as incitement to hatred, defamation of religions and insult, and influenced over 20 other nations to enact similar laws. Such forms of regulation often target hate speech but risk suppressing political opposition, particularly in authoritarian regimes. The European Union’s Digital Services Act (DSA) became fully applicable in 2024, imposing stringent removal obligations on platforms. While these legislative developments aim to mitigate the adverse impacts of unmoderated online content, they also reveal the delicate balance between preserving freedom of expression and addressing the challenges posed by the dissemination of harmful digital content. A 2024 report published by the Future of Free Speech found that a substantial majority of deleted comments on Facebook and YouTube in France, Germany, and Sweden were legally permissible, suggesting that platforms may be over-removing content to avoid regulatory penalties. The report focused on comments falling within the ambit of hate speech. Against this backdrop, this short piece looks at some key issues arising from the current strategies adopted towards moderating online ‘hate speech.’

 

Hate Speech on Social Media Platforms: Semantics and Context

Hate speech sits at a complex nexus between the right to freedom of expression and the right to non-discrimination, alongside concepts of dignity, liberty, and equality. There is no universally accepted definition of hate speech, a gap that reflects varying interpretations of free speech and harm across different countries and regions. Recommendation CM/Rec(2022)16 of the Council of Europe’s Committee of Ministers provides some definitional framework for hate speech, describing it as:

"all types of expression that incite, promote, spread or justify violence, hatred or discrimination against a person or group of persons, or that denigrates them, by reason of their real or attributed personal characteristics or status such as ‘race’, colour, language, religion, nationality, national or ethnic origin, age, disability, sex, gender identity and sexual orientation."

The DSA refers to ‘illegal hate speech’ but does not define it. Meta offers a broad definition of hate speech, noting that it covers ‘direct attacks against people – rather than concepts or institutions – on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.’ Additional characteristics, such as age and occupation, are treated as protected for the purposes of the hate speech clause when they are targeted in combination with one or more of the listed protected characteristics. Meta extends this definitional framework by referring to a ‘hate speech attack’, which includes, amongst others, dehumanizing speech, cursing, and calls for exclusion or segregation. A 2023 report assessing the hate speech policies of eight social media platforms found a significant expansion in the scope of these policies, encompassing both the types of content and the protected characteristics they cover.

 

Hate Speech on Social Media Platforms

Social media platforms represent a significant free speech revolution. They reach diverse audiences, disseminate vast amounts of information, cross borders, and enable the expression of opinions. However, some opinions can be hateful, bigoted, and violent. Anonymity and instantaneity can embolden people to express more outrageous, obnoxious, or hateful opinions than they would in real life. Online hate speech is characterized by its ever-changing nature, varying across different times and locations, and evolving in response to contextual developments such as elections, terrorist attacks, and health crises. While the prevailing narrative suggests rampant hate speech on social media, some research indicates its actual prevalence may be lower than commonly perceived. For instance, a study by Siegel et al., which investigated whether Donald Trump's 2016 election campaign and the six months that followed led to an increase in hate speech, found that, on any given day, only 0.001% to 0.003% of the tweets examined contained hate speech. In addition, not all hate speech is the same, and thus not all responses must fall within the same framework. On the one hand, hate speech has incited real-life harm, exemplified by the genocide of Rohingya Muslims in Myanmar; Facebook has been accused of negligently facilitating these atrocities by allegedly amplifying hate speech through its algorithm and failing to remove hateful posts in a timely manner. On the other, we see high removal rates of even legal content, as discussed in the 2024 report published by the Future of Free Speech. Differentiating between these types of content is central to properly dealing with online content placed under the generic and unclear umbrella of ‘hate speech.’

 

Legislation on Platform Liability

Germany’s NetzDG imposed responsibility on major social media platforms to remove reported illegal content, including ‘insult’ and ‘defamation of religions’, within 24 hours under threat of hefty fines. Despite Germany's well-intentioned efforts, the NetzDG approach has inadvertently legitimized, and provided a prototype for, more speech-restrictive measures by authoritarian regimes. These measures can be exploited to silence critics, marginalize minority groups, and control social media discourse. Article 16 of the Digital Services Act stipulates that ‘providers of hosting services’, which include online platforms, must adopt a notice-and-action procedure for removing content, including ‘illegal hate speech.’ The DSA provides for a plethora of other duties for platforms, including, inter alia, issuing transparency reports (Article 24) and maintaining an internal complaint-handling system (Article 20) through which users can lodge complaints related to the moderation of content. Platform obligations under the DSA vary according to size. As such, in addition to the above, Very Large Online Platforms (VLOPs), which have at least 45 million monthly active users in the EU, must conduct an annual assessment of systemic risks stemming from the functioning and use of their services, with assessments covering areas such as the dissemination of illegal content. Beyond free speech concerns and the possible ‘Brussels effect’ of this law, the EU's own concern about deteriorating rule of law and censorship in countries like Hungary makes the DSA problematic for the Union itself.

Scholars like Balkin describe Facebook as a public utility, emphasizing its crucial role in public participation. At the same time, these are private companies, not governed by International Human Rights Law (IHRL) and without direct obligations under Article 19 of the International Covenant on Civil and Political Rights. To add to this, companies face hefty fines under legislation like the NetzDG and the DSA and are tasked with monitoring and removing content deemed hateful, making them arbiters of the limits of free speech. This undermines the legitimacy of human rights protection and allows private companies to remove legal but controversial speech. The Special Rapporteur on Freedom of Opinion and Expression, Irene Khan, warned that enhanced platform liability might lead intermediaries to over-remove content for fear of sanctions. This was reflected in the aforementioned 2024 report, which revealed that the majority of removed content on Facebook and YouTube in France, Germany, and Sweden was, in fact, legally permissible. Depending on the sample, between 87.5% and 99.7% of deleted comments were found to be within legal bounds. Germany showed the highest proportion of legally permissible deletions, with 99.7% on Facebook and 98.9% on YouTube, indicating that the NetzDG may have prompted social media platforms to over-remove content to avoid hefty fines. In Sweden, 94.6% of deleted comments were legally permissible on both Facebook and YouTube. France had the lowest percentages, with 92.1% on Facebook and 87.5% on YouTube. Note that the collection and analysis of comments took place before the enforcement of the DSA.

 

The Use of Artificial Intelligence

Private companies increasingly rely on Artificial Intelligence (AI) for content moderation in response to the vast volume of online content and escalating legislative pressure. According to its latest report, Meta’s proactive removal rate for hate speech is 94.7%, indicating a reliance on technology to detect and remove violating content before users report it. YouTube’s statistics likewise show a high share of automated flagging among removed content, with 7,996,564 videos removed following automated flagging compared with 238,050 flagged by users and then removed. While AI is essential in combating issues like Child Sexual Abuse Material, its application in regulating more contentious areas like hate speech is complex. AI's limited ability to understand the nuances of human communication poses risks to free speech, access to information, and equality.

 

Conclusion

This piece has sought to shed light on two significant issues in handling online hate speech. First, imposing on private, profit-driven companies the responsibility to swiftly remove contested speech, including hate speech, poses challenges. Second, these companies' inherent shortcomings in deciphering what constitutes hate speech, especially in grey areas, can lead to over-removal and the suppression of legitimate discourse.

The role of intermediaries has evolved from passive platforms to active regulators of speech. States and organizations like the EU have intensified calls for platforms to proactively regulate hate speech, transforming intermediaries into the 'new governors of online speech.' However, these companies lack the judicial authority of courts and the capacity to consistently meet criteria like legality, proportionality, and necessity when determining speech limits. The pressure to swiftly remove content under threat of fines exacerbates the issue, compromising thorough content assessment.

Stakeholders, including policymakers, civil society organizations, and tech platforms, must formulate targeted strategies that address the diverse realities of harmful speech in a way that distinguishes between speech that incites or directly results in real-world violence and speech that, while harmful, remains non-violent in nature. The complexities of each type of speech and its context demand different regulatory approaches. For example, speech that leads to imminent violence requires swift and decisive intervention, whereas speech that is offensive, discriminatory, or hateful but does not incite immediate violence might call for educational, rehabilitative, or community-based responses. In addition, conflict or pre-conflict contexts may require more care than contexts of stability.

In crafting these strategies, decision-makers should be guided by rigorous scientific research on the psychological, social, and political impacts of hate speech as well as the consequences of restricting free speech. Evidence-based studies, drawn from an array of fields such as the social sciences and legal theory, should play a crucial role in shaping such frameworks. A one-size-fits-all solution is not only inadequate but could potentially exacerbate the problem, either by over-regulating and stifling legitimate discourse or by under-regulating and allowing harmful speech to proliferate unchecked. Moreover, these strategies should be dynamic and adaptable, recognizing that the digital landscape and forms of communication are constantly evolving, as are regional and national contexts, all of which requires ongoing reassessment of the balance between regulation and fundamental rights.

 

Bio:

 

Natalie Alkiviadou is a Senior Research Fellow at The Future of Free Speech. Her research interests lie in freedom of expression, the far-right, hate speech, hate crime and non-discrimination. She holds a PhD (Law) from the Vrije Universiteit Amsterdam. She has published three monographs, namely 'The Far-Right in International and European Law' (Routledge 2019), 'Legal Challenges to the Far-right: Lessons from England and Wales' (Routledge 2019) and 'The Far-Right in Greece and the Law' (Routledge 2022). She has published on hate speech, free speech and the far-right in a wide range of peer-reviewed journals, has been a reviewer for journals such as the International Journal of Human Rights and the Netherlands Quarterly of Human Rights, and a guest editor for the International Journal of Semiotics and the Law. Natalie has over ten years' experience working with civil society, educators and public servants on human rights education and has participated in European actions such as the High-Level Group on Combating Racism, Xenophobia and Other Forms of Intolerance. She was the country researcher for the 2019 European Network against Racism report on Hate Crime and the 2022 report on structural racism. She has drafted handbooks, strategy papers and shadow reports for projects funded by the Anna Lindh Foundation, the European Commission and the European Youth Foundation, on themes such as hate speech. Natalie is an International Fellow (2022/23) of the ISLC – Information Society Law Centre of the Università degli Studi di Milano.
