Artificial Intelligence & Human Rights: Friend or Foe?

By Alberto Quintavalla and Jeroen Temperman

Image source: https://www.techslang.com/ai-and-human-rights-are-they-related/ (Creative Commons)

The problem

Artificial intelligence (‘AI’) applications can have a significant impact on human rights. This impact can be twofold. On the one hand, AI may contribute to the advancement of human rights. A striking example is the use of machine learning in healthcare to improve precision medicine so that patients receive better care. On the other hand, AI can pose an obvious risk to human rights. Unfortunately, there are countless examples. Perhaps the most obvious is the use of algorithms that discriminate against ethnic minorities and women.

The call

It is in this context that international and national institutions are calling for further reflection on the prospective impact of AI. These calls are especially advanced at the European level, where the Council of Europe is actively involved. The time is indeed ripe to start mapping the risks that AI applications pose to human rights and, subsequently, to develop an effective legal and policy framework in response to those risks.

The event

On 28 October 2021, the hybrid workshop ‘AI & Human Rights: Friend or Foe?’ took place. On this occasion, several researchers from around the world met to discuss the prospective impact of AI on human rights. The event was organized by the Erasmus School of Law, and benefitted from the sponsorship of both the Netherlands Network for Human Rights Research and the Jean Monnet Centre of Excellence on Digital Governance.

Zooming out: the common theme(s)

The workshop consisted of various presentations, each addressing specific instances of the complex interaction between AI and human rights. Nonetheless, the discussion with the audience highlighted two common challenges in dealing with the prospective impact of AI on human rights. Firstly, recourse to market mechanisms or to regulatory instruments that aim to change individuals’ economic incentives (and, accordingly, behaviour) is not sufficient to address the issues raised by the use of AI. Regulation laying down a comprehensive set of rules applicable to the development and deployment of all AI applications is necessary to fill the existing regulatory gaps and safeguard fundamental rights. This is in line with the EU Commission’s recent proposal setting out harmonized rules for AI, including the requirement that so-called high-risk AI systems be subject to strict obligations prior to their market entry. Secondly, and relatedly, international measures alone are not enough to ensure the effective management of local issues or to produce regulation that is responsive to particular circumstances. Regulators should regularly look at the context in which emerging issues unfold: AI systems are deployed in culturally different environments, each with specific local features.

Zooming in: the various panels

The remaining part of this blog post provides a short overview of the more specific arguments and considerations presented during the workshop, which consisted of five panels.

The first panel revolved around questions of AI and content moderation, biometric technologies, and facial recognition. The discussion emphasized major privacy concerns as well as chilling effects on free speech and the freedom of association in this area.

The second panel, among other issues, continued the content moderation discussion by arguing that the risks of deploying AI-based technologies must be weighed against their human rights potential, for instance in combating hateful speech. The dynamics between AI and human rights were also assessed through the lenses of data analytics, machine learning, and regulatory sandboxes.

The third panel aimed to complement the conventional discussions on AI and human rights by focusing on the contextual and institutional dimensions. Specifically, it stressed the relevance of integrating transnational standards into regulatory environments at lower governance levels, since these tend to take more heed of citizens’ preferences; the expanding role of automation in administrative decision-making and the associated risk of being left without an effective remedy; the ever-increasing role of AI-driven applications in business practices and the need to protect consumers from, for example, distortion of their personal autonomy or indirect discrimination; and the impact that AI applications can have on workers’ human rights in the workplace. These presentations yielded a broader discussion on the need for a reliable framework of digital governance that protects human beings in the vulnerable roles they adopt (i.e., as citizens, consumers, and workers).

The fourth panel further expanded the analysis to how AI may expose individuals and groups to other risks in particular situations that have so far been overlooked by current scholarship. Specifically, it discussed the right to freedom of religion or belief, the right to be ignored in public spaces, and the use of AI during the pandemic and its impact on human rights implementation. All three presentations stressed that AI surveillance is an important facet that regulatory efforts should target.

Lastly, the fifth panel ventured into a number of specific human rights and legal issues arising from the interplay between AI and the rights of minority groups such as refugees, LGBTQI persons and women. The discussion mostly revolved around the serious discriminatory harm that the use of AI applications can cause. References were made, in particular, to bias in the training data employed by AI systems as well as the underrepresentation of minority groups in the technology sector.

A provisional conclusion

The discussion during the workshop showed that the startling increase in AI applications poses significant threats to several human rights. These threats are, however, not yet fully spelled out. The efforts of policymakers and academic researchers should therefore be directed at pinpointing the specific threats that emerge from AI deployment. Only then will it be possible to develop a legal and policy framework that responds to those threats and ensures sufficient protection of fundamental rights. Admittedly, this framework will need to grant a measure of discretion to lower governance levels so that context-specific factors can be integrated. On a more positive note, the presentations from the workshop emphasized that AI applications can also be employed as a means of protecting fundamental rights.

Bio:

 

Jeroen Temperman is Professor of International Law and Head of the Law & Markets Department at Erasmus School of Law, Erasmus University, Rotterdam, Netherlands. He is also the Editor-in-Chief of Religion & Human Rights and a member of the Organization for Security and Co-operation in Europe’s Panel of Experts on Freedom of Religion or Belief. He has authored, among other books, Religious Hatred and International Law (Cambridge: Cambridge University Press, 2016) and State–Religion Relationships and Human Rights Law (Leiden: Martinus Nijhoff, 2010) and edited Blasphemy and Freedom of Expression (Cambridge: Cambridge University Press, 2017) and The Lautsi Papers (Leiden: Martinus Nijhoff, 2012).

 

 

Alberto Quintavalla is Assistant Professor at the Department of Law & Markets at Erasmus School of Law (Erasmus University Rotterdam) and affiliated researcher at the Jean Monnet Centre of Excellence on Digital Governance. He received his doctoral degree from Erasmus University Rotterdam in 2020 with research on water governance, conducted at the Rotterdam Institute of Law & Economics and the Department of International and European Union Law. He has been a visiting researcher at the Hebrew University of Jerusalem and the European University Institute. His research interests lie at the intersection of environmental governance, human rights, and digital technologies. He is admitted to the Italian Bar.

 

Automated Content Moderation, Hate Speech and Human Rights

by Natalie Alkiviadou

Image source: mikemacmarketing

Within the framework of a multi-stakeholder, cross-border EU project entitled SHERPA ‘Shaping the Ethical Dimensions of Smart Information Systems (SIS)’, led by De Montfort University (UK), a deliverable was developed on 11 specific challenges that SIS (the combination of artificial intelligence and big data analytics) raise with regard to human rights. This blog post focuses on one of those challenges, namely ‘Democracy, Freedom of Thought, Control and Manipulation’, which considered, amongst other things, the impact of SIS on freedom of expression, bias and discrimination. Building on the initial findings of that report, this short piece examines the use of Artificial Intelligence (AI) in the content moderation of online hate speech. This particular challenge was chosen because the moderation of online hate speech is a hot potato for social media platforms, States and other stakeholders such as the European Union, with recent developments such as the EU’s Digital Services Act and the proposed Artificial Intelligence Act seeking to lay new ground for how online content and the use of AI will be managed.

Online communication occurs on a “massive scale”, rendering it impossible for human moderators to review all content before it is made available. The sheer quantity of online content also makes reviewing even reported content a difficult task. In response, social media platforms increasingly depend on AI in the form of automated mechanisms that proactively or reactively tackle problematic content, including hate speech. Technologies for handling content such as hate speech are still in their “infancy”. The algorithms developed to achieve this automation are habitually customized per content type, such as pictures, videos, audio and text. The use of AI is a response not only to issues of quantity but also to increasing State pressure on social media platforms to remove hate speech quickly and efficiently. Examples of such pressure include, inter alia, the German NetzDG, which requires large social media platforms to remove reported content that is deemed illegal under the German Penal Code, to do so quickly (sometimes within 24 hours), and at the risk of heavy fines (up to 50 million Euros in certain cases). To comply with such standards, companies use AI, alone or in conjunction with human moderation, to remove allegedly hateful content. As noted by Oliva, such circumstances have prompted companies to “act proactively in order to avoid liability…in an attempt to protect their business models”. Gorwa, Binns and Katzenbach highlight that as “government pressure on major technology companies build, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation”. Further, the “work from home” Covid-19 situation has led to enhanced reliance on AI, accompanied by errors in moderation. As YouTube noted, for example, the reduction of in-office staff due to COVID-19 meant that the company temporarily relied more on technology for content review, which could lead to errors in content removals.
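
To make the interplay between automated and human moderation more concrete, the following Python sketch shows a purely illustrative triage pipeline: content scoring above a high threshold is removed automatically, borderline content is queued for human review, and the rest stays online. The toy scorer, thresholds and labels are assumptions made up for this example; they do not describe any platform’s actual system.

```python
# Purely illustrative sketch of AI-assisted moderation triage.
# The toy scorer, thresholds and labels are assumptions for this example only.

AUTO_REMOVE_THRESHOLD = 0.9   # act without human review above this score
HUMAN_REVIEW_THRESHOLD = 0.5  # send borderline content to human moderators

# Stand-in lexicon; a real system would use a trained classifier instead.
FLAGGED_TERMS = {"hateword1", "hateword2"}


def score_hate(text: str) -> float:
    """Toy stand-in for an ML classifier: share of words that are flagged."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in FLAGGED_TERMS for w in words) / len(words)


def moderate(text: str) -> str:
    score = score_hate(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed_automatically"    # proactive, no human in the loop
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queued_for_human_review"  # AI triage plus human decision
    return "kept_online"


print(moderate("hateword1 hateword2"))         # removed_automatically
print(moderate("hateword1 in a news report"))  # kept_online
```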

Over-blocking and Freedom of Expression

Relying on AI, even without human supervision, can be justified for content that could never be ethically or legally acceptable, such as child abuse material. The issue becomes more complicated in contested areas where there is little or only complicated legal (or ethical) clarity on what should actually be allowed and what should not, such as hate speech. In the ambit of such speech, Llansó states that the use of these technologies raises “significant questions about the influence of AI on our information environment and, ultimately, on our rights to freedom of expression and access to information”. For example, YouTube wrongly shut down (and later reinstated) an independent news agency reporting war crimes in Syria: several videos were wrongly flagged as inappropriate by an automatic system designed to identify extremist content. Hash-matching technologies such as PhotoDNA likewise operate with a “context blindness” of the kind that could explain the removal of the Syrian videos. YouTube subsequently reinstated thousands of the videos that had been wrongly removed.
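
As a rough illustration of why hash matching is “context blind”, consider the sketch below. It uses a toy average-hash built on the Pillow library (not PhotoDNA, whose algorithm is proprietary), and the file names are placeholders. Because the fingerprint is computed purely from pixels, the same frame is matched whether it appears in extremist propaganda or in a news report documenting war crimes.

```python
# Toy average-hash for illustration only (not PhotoDNA); file names are placeholders.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint based only on its pixels."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def is_blocked(path: str, blocklist: set, max_distance: int = 5) -> bool:
    """Match on visual similarity alone: the hash knows nothing about who
    posted the image, why, or with what caption."""
    h = average_hash(path)
    return any(bin(h ^ known).count("1") <= max_distance for known in blocklist)


# The same frame circulates as propaganda and as documentation of war crimes;
# the comparison below cannot tell the two uses apart.
blocklist = {average_hash("propaganda_frame.png")}
print(is_blocked("news_report_same_frame.png", blocklist))  # likely True
```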

As highlighted in a Council of Europe report, automated mechanisms directly impact freedom of expression, which raises concerns vis-à-vis the rule of law and, in particular, the notions of legality, legitimacy and proportionality. The Council of Europe noted that the enhanced use of AI for content moderation may result in over-blocking and consequently place freedom of expression at risk. Beyond that, Gorwa, Binns and Katzenbach argue that the increased use of AI threatens to exacerbate the already existing opacity of content moderation, further complicate the issue of justice online and “re-obscure the fundamentally political nature of speech decisions being executed at scale”. Automated mechanisms fundamentally lack the ability to comprehend the nuance and context of language and human communication. The following section provides an example of how automated mechanisms may become inherently biased and thereby raise further concerns about respect for the right to non-discrimination.

The Issue of Bias and Non-Discrimination

AI can absorb biases at the design or enforcement stage. In its report ‘Mixed Messages? The Limits of Automated Social Media Content Analysis’, the Center for Democracy & Technology revealed that automated mechanisms may disproportionately impact the speech of marginalized groups. Although technologies such as natural language processing and sentiment analysis have been developed to detect harmful text without relying on specific words or phrases, research has shown that they are “still far from being able to grasp context or to detect the intent or motivation of the speaker”. Such technologies are simply not cut out to pick up on the language used, for example, by the LGBTQ community, whose “mock impoliteness” and reclaimed use of terms such as “dyke,” “fag” and “tranny” serve as a form of reclamation of power and a means of empowering members of this community to deal with hatred and discrimination.
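
A tiny, hypothetical example can show how such context blindness penalises reclaimed in-group speech. The word list and example posts below are illustrative assumptions only and do not reflect any platform’s actual rules.

```python
# Toy keyword filter: flags a listed term regardless of speaker, intent or context.
RECLAIMED_OR_SLUR_TERMS = {"dyke", "fag", "tranny"}  # terms quoted above


def flag(post: str) -> bool:
    """Flag a post if it contains a listed term, with no sense of context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & RECLAIMED_OR_SLUR_TERMS)


in_group_reclamation = "Proud to march with the dyke contingent at Pride!"
hateful_attack = "No dyke should be allowed to march here."

# Both posts are flagged identically: the filter cannot see reclamation.
print(flag(in_group_reclamation), flag(hateful_attack))  # True True
```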

Through the conscious or unconscious biases embedded in the automated mechanisms that moderate content, as depicted in the above examples, the use of AI against online hate speech not only infringes freedom of expression through over-blocking and the silencing of dissenting voices, but also shrinks the space available to minority groups such as the LGBTQ community. This shrinking space, the product of inherent bias, thus amounts to a violation of a fundamental doctrine of international human rights law: non-discrimination.

Concluding Comments

As noted by Llansó, the above issues cannot be tackled with more sophisticated AI. Tackling hate speech by relying on AI without human oversight (to say the least), and doing so proactively rather than only reactively, places freedom of expression in a fragile position. At the same time, the inability of these technologies to pick up on the nuances of human communication, together with the biases that have shaped their make-up and functioning, raises issues pertaining to the doctrine of non-discrimination.

Bio: