Banning Russia Today and Sputnik in Europe is a bad idea

Source: https://wordpress.org/openverse/image/0ba994c4-1df9-4667-9904-da741ecb187c/
By Raghav Mendiratta and Natalie Alkiviadou

On March 1, 2022, Regulation 2022/350 of the Council of the European Union (EU) suspended the broadcasting activities of Russia Today (RT) and Sputnik in the EU until Russia ends its aggression against Ukraine and its media “cease to conduct propaganda actions” against the EU and its Member States. The Regulation (as well as the respective Council Decision) justified this measure on the grounds that Russia has engaged in a “systematic, international campaign of media manipulation and distortion of facts to enhance its strategy of destabilization of its neighboring countries and of the Union and its Member States”.

On March 4, in an email addressed to Google and cataloged in the Lumen database, the European Commission required search engines such as Google to delist RT and Sputnik. It further stated that social media platforms “must prevent users from broadcasting (lato sensu) any content of RT and Sputnik”, while accounts belonging to the two outlets or their affiliates must be suspended. Posts by individuals that reproduce RT and Sputnik content must not be published and, if they are, must be deleted. This is a particularly broad interpretation of the Regulation and imposes a general monitoring obligation on operators that may be disproportionate. A general monitoring obligation is contrary to the doctrine of conditional liability attached to the E-Commerce Directive and the proposed text of the Digital Services Act (DSA). Under that doctrine, companies are not obliged to monitor user content wholesale; instead, they are liable for illegal content that they are made aware of. In the DSA, this takes the form of a “notice and take down” regime.

The Limited Effect of Propaganda and the Legitimacy of the Ban

RT and Sputnik are both closely linked to the Kremlin. Sputnik was created by a presidential decree with the aim to “report on the state policy of Russia abroad”. RT is fully financed by the Russian government and is included in an official list of core organizations of strategic importance to Russia. Both have been documented spreading disinformation in Europe on various occasions. However, banning them entirely deviates from the standard European approach to handling disinformation, which does not include blanket bans and removals. In fact, the proposed text of the DSA stipulates that in “extraordinary circumstances”, including war, where the online environment may be misused for the rapid spread of illegal content or disinformation, the European Commission may initiate the drawing up of voluntary crisis protocols to coordinate a response in the online environment. Such protocols may include measures that are “strictly necessary to address the extraordinary circumstance” and “must not amount to a general obligation for…very large online platforms to monitor the information”.

Further, the approach to RT and Sputnik is misplaced and does not reflect empirical and socio-political realities. Empirical data increasingly suggests that the perception that social media is awash in misinformation is exaggerated.

Considering the evidence on the limited impact of propaganda, the role of counter-narratives, and the over-zealous nature of the measures, it is doubtful whether the Regulation and the Commission’s interpretation are compatible with International Human Rights Law (IHRL). Article 20(1) of the International Covenant on Civil and Political Rights (ICCPR) prohibits any propaganda for war. However, General Comment 34 of the Human Rights Committee highlights that any legal prohibitions arising from Article 20 must be justified and in strict conformity with Article 19, which provides for freedom of expression. Restrictions under Article 19 can only be legitimate if they meet the strict tests of necessity and proportionality, which require an immediate and direct connection between the speech and the threat, and measures must be “the least intrusive instruments” to achieve the legitimate aim pursued. Less intrusive measures could include labelling, downranking, or technical solutions to prevent virality. However, we are confronted with the lack of a substantive, evidence-based justification for the blanket ban of RT and Sputnik, which consequently does not reflect a direct link with the aim pursued and is over-broad and over-intrusive.

Further, the approach prevents users from engaging with such content for political discourse or counter-narratives and sets a negative precedent for social media platforms by signaling that blanket removals of content are necessary and proportionate. Moreover, thanks to the Brussels Effect and the digitalization of the world stage, such measures could ultimately lead to widespread censorship not just in the EU but also in other parts of the world, stifling discussion of issues of public interest as well as criticism of governments. They also open the floodgates for authoritarian leaders around the world, including Putin himself, to cite this as precedent for censoring content in their own countries. Unsurprisingly, in the days following the EU Regulation of March 1, Russia banned numerous Western media outlets, including the BBC, Deutsche Welle, and Euronews.

RT (France) has challenged the sanctions before the EU’s General Court, which will be called upon to make significant determinations about the current situation but also about the future of free speech in the Union. Any outcome will be a double-edged sword. On a political level, a judgment in favor of RT (France) would give an immense boost to Putin’s propaganda machine. A judgment in favor of the restrictive measures would further deteriorate the already vulnerable position in which freedom of expression finds itself and further dilute central rule-of-law doctrines such as proportionality, necessity, and transparency.

A dangerous card from the authoritarian’s playbook

In light of the above, we argue that the conformity of the ban with IHRL is dubious and that it goes against the EU’s general position on handling disinformation. Moreover, letting RT and Sputnik run their course unfettered, thereby allowing political discourse and, most importantly, counter-narratives, would be more effective in tackling the problems associated with these two outlets. As the UN Special Rapporteur on Freedom of Opinion and Expression noted in her report on disinformation, “attempts to combat disinformation by undermining human rights are short-sighted and counterproductive”. Freedom of expression must be given the position it deserves in times of peace and war. There ought to be space for war critics, ordinary citizens, and scholars to debunk myths and counter disinformation. If the EU does not reverse course in the cases of RT and Sputnik, it may be playing a dangerous card from the authoritarian’s playbook.

Bios: 

Natalie Alkiviadou is a Senior Research Fellow at the Future of Free Speech Project at Justitia.

Raghav Mendiratta is a tech policy counsel and a Legal Fellow at the Future of Free Speech Project (Justitia and Columbia University, New York).


Automated Content Moderation, Hate Speech and Human Rights

By Natalie Alkiviadou


Source: mikemacmarketing

Within the framework of SHERPA (‘Shaping the Ethical Dimensions of Smart Information Systems’), a multi-stakeholder, cross-border EU project led by De Montfort University (UK), a deliverable was developed on 11 specific challenges that smart information systems (SIS, the combination of artificial intelligence and big data analytics) raise with regard to human rights. This blog post focuses on one of those challenges, namely ‘Democracy, Freedom of Thought, Control and Manipulation’, which considered, amongst others, the impact of SIS on freedom of expression, bias and discrimination. Building on the initial findings of that report, this short piece examines the use of Artificial Intelligence (AI) in the moderation of online hate speech. This challenge was chosen because the moderation of online hate speech is a hot potato for social media platforms, States and other stakeholders such as the European Union, with recent developments such as the EU’s Digital Services Act and the proposed Artificial Intelligence Act seeking to cultivate new ground through which online content and the use of AI will be managed.

Online communication occurs on a “massive scale”, rendering it impossible for human moderators to review all content before it is made available. The sheer quantity of online content also makes reviewing even reported content a difficult task. In response, social media platforms are depending more and more on AI in the form of automated mechanisms that proactively or reactively tackle problematic content, including hate speech. Technology for handling content such as hate speech is still in its “infancy”. The algorithms developed to achieve this automation are habitually customized by content type, such as pictures, videos, audio and text. The use of AI is a response not only to issues of quantity but also to increasing State pressure on social media platforms to remove hate speech quickly and efficiently. Examples of such pressure include, inter alia, the German NetzDG, which requires large social media platforms to remove reported content that is deemed illegal under the German Penal Code, to do so quickly (sometimes within 24 hours), and at risk of heavy fines (up to 50 million Euros in certain cases). To comply with such standards, companies use AI, alone or in conjunction with human moderation, to remove allegedly hateful content. As noted by Oliva, such circumstances have prompted companies to “act proactively in order to avoid liability…in an attempt to protect their business models”. Gorwa, Binns and Katzenbach highlight that as “government pressure on major technology companies build, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation”. Further, the “work from home” Covid-19 situation has led to enhanced reliance on AI, accompanied by errors in moderation. YouTube, for example, noted that due to COVID-19 the amount of in-office staff was reduced, meaning that the company temporarily relied more on technology for content review, and that this could lead to errors in content removals.
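To make the dynamic described above concrete, the sketch below shows one way such a proactive pipeline can be structured. It is a minimal illustration in Python, not any platform’s actual system: the thresholds, labels and the crude keyword-based toxicity_score are hypothetical stand-ins for a trained classifier.

```python
# Minimal sketch of a proactive moderation pipeline (illustrative only;
# thresholds, labels and the keyword heuristic are hypothetical).

AUTO_REMOVE_THRESHOLD = 0.95   # high confidence: removed with no human in the loop
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain band: queued for a (backlogged) human moderator

def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier; here, a crude keyword heuristic."""
    flagged_terms = {"kill", "vermin", "subhuman"}
    words = text.lower().split()
    hits = sum(1 for word in words if word in flagged_terms)
    return min(1.0, 10 * hits / max(len(words), 1))

def moderate(post: str) -> str:
    score = toxicity_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"    # acted on proactively, before any user report
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queued"     # deferred to human review
    return "published"

print(moderate("they are subhuman vermin"))   # "removed": a plausible true positive
print(moderate("this workout will kill me"))  # also "removed": the keyword match sees no context
```

The second call already hints at the over-blocking problem discussed below: once review is automated at this scale, the cost of each false positive falls on the speaker rather than the platform.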

Over-blocking and Freedom of Expression

Relying on AI, even without human supervision, can be supported when it comes to content that could never be ethically or legally justifiable, such as child abuse. The issue becomes more complicated, however, in contested areas where there is little legal (or ethical) clarity on what should actually be allowed (and what not), such as hate speech. In the ambit of such speech, Llansó states that the use of these technologies raises “significant questions about the influence of AI on our information environment and, ultimately, on our rights to freedom of expression and access to information”. For example, YouTube wrongly shut down (then reinstated) an independent news agency reporting war crimes in Syria after several videos were wrongly flagged as inappropriate by an automatic system designed to identify extremist content. Hash-matching technologies such as PhotoDNA operate with a “context blindness” that may explain the removal of the videos on Syria. YouTube subsequently reinstated thousands of the videos that had been wrongly removed.
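A rough sense of why hash matching is context-blind can be given in a few lines. The sketch below is a toy version only: PhotoDNA computes a perceptual hash that survives re-encoding and resizing, whereas this illustration uses a plain SHA-256 of the raw bytes, and the blocklist is an assumed, empty placeholder.

```python
import hashlib

# Hashes of previously identified extremist media would be loaded here
# (empty placeholder for illustration).
KNOWN_EXTREMIST_HASHES: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    # Toy stand-in for a perceptual hash such as PhotoDNA's.
    return hashlib.sha256(media_bytes).hexdigest()

def should_block(media_bytes: bytes) -> bool:
    # The decision is a pure lookup: a video documenting war crimes and a
    # recruitment video containing the same footage produce the same
    # fingerprint, so the uploader's intent never enters the decision.
    return fingerprint(media_bytes) in KNOWN_EXTREMIST_HASHES
```

Nothing in should_block can distinguish journalism from glorification; that distinction lives entirely outside the lookup, which is precisely the “context blindness” at issue.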

As highlighted in a Council of Europe report, automated mechanisms directly impact freedom of expression, raising concerns vis-à-vis the rule of law and, in particular, the notions of legality, legitimacy and proportionality. The Council of Europe noted that the enhanced use of AI for content moderation may result in over-blocking and consequently place freedom of expression at risk. Beyond that, Gorwa, Binns and Katzenbach argue that the increased use of AI threatens to exacerbate the already existing opacity of content moderation, further complicate the issue of justice online and “re-obscure the fundamentally political nature of speech decisions being executed at scale”. Automated mechanisms fundamentally lack the ability to comprehend the nuance and context of language and human communication. The following section provides an example of how automated mechanisms may become inherently biased, raising further concerns about respect for the right to non-discrimination.

The Issue of Bias and Non-Discrimination

AI can absorb bias at the stage of design or of enforcement. In its report ‘Mixed Messages? The Limits of Automated Social Media Content Analysis’, the Center for Democracy & Technology revealed that automated mechanisms may disproportionately impact the speech of marginalized groups. Although technologies such as natural language processing and sentiment analysis have been developed to detect harmful text without relying on specific words or phrases, research has shown that they are “still far from being able to grasp context or to detect the intent or motivation of the speaker”. Such technologies are simply not cut out to pick up on the language used, for example, by the LGBTQ community, whose “mock impoliteness” and use of terms such as “dyke,” “fag” and “tranny” is a form of reclaiming power and a means of empowering members of the community to deal with hatred and discrimination.
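This failure mode is easy to reproduce with even the simplest classifier. The toy lexicon-based filter below is illustrative only; production systems use trained models, but the same bias appears when training data labels in-group usage as toxic.

```python
# Toy lexicon filter illustrating the bias described above (illustrative only).
SLUR_LEXICON = {"dyke"}  # a term the article notes is reclaimed in-group

def is_flagged(text: str) -> bool:
    # A lexicon lookup knows nothing about speaker identity or intent, so
    # reclamation and "mock impoliteness" within the LGBTQ community are
    # flagged exactly like an attack directed at it.
    return any(term in text.lower().split() for term in SLUR_LEXICON)

print(is_flagged("proud dyke marching at pride this weekend"))  # True: in-group speech is silenced
```

A trained model with more capacity does not automatically escape this: if its training data labels most sentences containing the term as abusive, it learns the same blunt association.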

Through the conscious or unconscious biases that mark the automated mechanisms moderating content, as depicted in the above examples, the use of AI to moderate online hate speech not only infringes freedom of expression through over-blocking and the silencing of dissenting voices, but also shrinks the space for minority groups such as the LGBTQ community. This shrinking space, resulting from inherent bias, leads to a violation of a fundamental doctrine of international human rights law, namely that of non-discrimination.

Concluding Comments

As noted by Llansó, the above issues cannot be solved with more sophisticated AI. Tackling hate speech by relying on AI without human oversight (to say the least), and doing so proactively rather than only reactively, places freedom of expression in a fragile position. At the same time, the inability of these technologies to pick up on the nuances of human communication, together with the biases that have affected their make-up and functioning, raises issues pertaining to the doctrine of non-discrimination.

Bio:

Natalie Alkiviadou is a Senior Research Fellow at Justitia, Denmark.

God “does not and cannot bless sin” - Hate Speech Laws: Quo Vadis?

By Natalie Alkiviadou
Source: "All shall be equal before the law: justice graffiti in Cape Town, South Africa" by Ben Sutherland is licensed under CC BY 2.0

On 15 March 2021, the Vatican issued a statement (approved by the Pope) noting that the Catholic Church would not bless same-sex unions, referring to them as “sinful” and underlining that God “does not and cannot bless sin.” This statement gives food for thought on many fronts, including the issue of hate speech regulation. In this ambit, this short piece considers such regulation as well as the position of the European Court of Human Rights (ECtHR) on speech against the LGBT community. More...

Decent work for Migrant Domestic Workers: An Unrealised Promise?

By Natalie Alkiviadou

"International Slavery Museum - Albert Dock - Liverpool - Legacies of slavery - Migrant domestic workers and Kalayaan at the May Day Rally 2007" by ell brown 

Introduction

In 2019, the International Labour Organization (ILO) estimated that there were approximately 11.5 million migrant domestic workers (MDWs) around the globe, 8.5 million of whom were women. The entrance of women into the labour market and ageing populations have been central factors contributing to the rising demand for cheap female migrant domestic workers (FMDWs). FMDWs fill the gaps in ineffective systems of social welfare, which cannot support, inter alia, an ageing population. In this ambit, FMDWs are caught at ‘the intersection of care work exploitation with gender, ethnic and migrant oppression in the context of a globalising world.’ This piece is based on research conducted on the situation of FMDWs in Cyprus and seeks to set out the international legal framework that exists to protect this group of workers.

More...