“The computer said it was OK!”: human rights (and other) implications of manipulative design (Part 2/2)

 

By Dr. Silvia De Conca

 

 
Credit: Silvia De Conca

 

On November 19th, 2021, the “Human Rights in the Digital Age” working group of the NNHRR held a multidisciplinary workshop on the legal implications of ‘online manipulation’. This is Part 2 of a two-part series.

Manipulative design, autonomy, and human rights

By turning individuals into means to an end, manipulative design infringes on their dignity, because it affects their intrinsic value as human beings. Manipulative design is a constraint on individual autonomy, whether it is used for ‘paternalistic’ policymaking or by companies for profit. The very nature of manipulation makes it incompatible with self-determination, because manipulation acts beyond the control of its addressees, covertly steering their decision-making processes. Autonomy is one of the values underlying many human rights provisions. The European Court of Human Rights (ECtHR) has consistently affirmed that autonomy is an underlying principle, functional to interpreting some of the guarantees and protections offered by the European Convention on Human Rights (ECHR). This is the case, for instance, with the right to privacy (article 8 ECHR), which has been interpreted as protecting autonomy and self-determination (Pretty v The U.K., 29 April 2002). The right to privacy also protects individual integrity, which includes not just physical aspects but also autonomy, feelings, self-esteem, and thoughts. Manipulation can potentially infringe upon both autonomy and integrity, as it interferes with the capability of individuals to take a decision independently and to carry it out, online or offline.

The ECHR also protects the freedom of thought, conscience, and religion of individuals (article 9). So far, the existing case-law and interpretations of this provision have focused solely on the religious aspect, discussing the relationship between citizens and states with regard to adhering to a belief. Only in recent times, following the development of brain-computer interfaces (BCI) and the possibility for technology to tap into our minds, has the debate around article 9 turned to the freedom of thought. One of the topics discussed by experts is what happens if BCI enables companies or states to affect and manipulate the thoughts of individuals. In this sense, the widespread use of online manipulation makes this question more pressing. BCI is still at a very early stage, and its capability to affect the thoughts of individuals is uncertain. Online manipulation, on the contrary, is already here, and is being used on millions of users of digital products and services. Considering how underdeveloped the interpretation of article 9 ECHR is with regard to freedom of thought, an intervention in this direction by the Council of Europe or the ECtHR would be desirable.

The interferences of manipulative design with autonomy are not limited to the individual level: in the medium and long term, the interaction of profiling and manipulative design can pose risks to the very foundations of democracy. Individual autonomy is, in fact, also considered functional to the development of citizens. Consequently, protecting individual autonomy is fundamental at the collective level too, to foster a healthy democratic balance.

Both commercial and public-policy applications of manipulative design have the potential to affect democracy because, in the long term, individuals can lose their decision-making capacity; if individuals lose the ‘practice’ of taking decisions, this can reverberate at the collective level. The Council of Europe has intervened on the matter in its 2019 Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes. The Declaration contains a recommendation for Member States to regulate persuasion used in combination with AI, in order to protect the democratic order. First, however, it is necessary to assess where the threshold lies between unacceptable and acceptable manipulative design practices.

Finally, it is also necessary to reflect on the broader implications of manipulation in combination with the online architecture that permeates every aspect of our daily lives. Manipulative design leads to a power imbalance between individuals and companies, and between citizens and states. This brings attention to the legitimation of private companies, especially in cases of public-private partnerships. The online architecture is largely in the hands of private parties, and this affects how legislative interventions are designed and, most of all, implemented. With the Internet of Things (IoT), the blurring of the boundaries between the online and offline dimensions can make manipulative design migrate from websites to our homes and streets. This sheds new light on the importance of the positive obligations of states to uphold and foster human rights (such as, but not limited to, the abovementioned rights to privacy and freedom of thought) and shows the need for further reflection and investigation.

The author would like to thank student assistants Jorge Constantino and Jade Baltjes for taking notes during the workshop: their excellent notes were of great use while drafting this piece.

Bio:

Dr. Silvia De Conca is the co-chair of the Human Rights in the Digital Age working group of the Netherlands Network for Human Rights Research. Silvia is Assistant Professor in Law & Technology at the Transnational Legal Studies department of the Vrije Universiteit Amsterdam, and a board member of the Amsterdam Law & Technology Institute at VU (ALTI Amsterdam). Her research interests include the law of AI and robotics, online manipulation, and privacy & data protection.

 

Artificial Intelligence & Human Rights: Friend or Foe?

 

By Alberto Quintavalla and Jeroen Temperman

 

Image credit: Creative Commons license. Source: https://www.techslang.com/ai-and-human-rights-are-they-related/

 

The problem

Artificial intelligence (‘AI’) applications can have a significant impact on human rights. This impact can be twofold. On the one hand, AI may contribute to the advancement of human rights. A striking example is the use of machine learning in healthcare to improve precision medicine, so that patients receive better care. On the other hand, it can pose an obvious risk to the respect for human rights. Unfortunately, there are countless examples. Perhaps the most obvious one is the use of algorithms that discriminate against ethnic minorities and women.

The call

It is in this context that international and national institutions are calling for further reflection on the prospective impact of AI. These calls are especially advanced at the European level, including through the active involvement of the Council of Europe. The time is in fact ripe to start mapping the risks that AI applications pose to human rights and, subsequently, to develop an effective legal and policy framework in response to those risks.

The event

On 28 October 2021, the hybrid workshop ‘AI & Human Rights: Friend or Foe?’ took place. On this occasion, several researchers from around the world met to discuss the prospective impact of AI on human rights. The event was organized by the Erasmus School of Law, and benefitted from the sponsorship of both the Netherlands Network for Human Rights Research and the Jean Monnet Centre of Excellence on Digital Governance.

Zooming out: the common theme(s)

The workshop consisted of various presentations, each addressing specific instances of the complex interaction between AI and human rights. Nonetheless, the discussion with the audience highlighted two common challenges in dealing with the prospective impact of AI on human rights. Firstly, recourse to market mechanisms, or to regulatory instruments aimed at changing individuals’ economic incentives (and, accordingly, behaviour), is not sufficient to address the issues raised by the use of AI. Regulation laying down a comprehensive set of rules applicable to the development and deployment of all AI applications is necessary to fill the existing regulatory gaps and safeguard fundamental rights. This is in line with the EU Commission’s recent proposal setting out harmonized rules for AI, including the requirement that so-called high-risk AI systems be subject to strict obligations before their market entry. Secondly, and relatedly, the development of international measures is not enough to ensure the effective management of local issues and to delineate regulation that is responsive to particular circumstances. Society should regularly look at the context in which emerging issues unfold: AI systems are in fact deployed in culturally different environments, each with specific local features.

Zooming in: the various panels

The remaining part of this blog post provides a short overview of the more specific arguments and considerations presented during the workshop, which consisted of five panels.

The first panel revolved around questions of AI and content moderation, biometric technologies, and facial recognition. The discussion emphasized major privacy concerns as well as the chilling effects on free speech and the freedom of association in this area.

The second panel, among other issues, continued the content moderation discussion by arguing that the risks of deploying AI-based technologies can be complemented by their human rights potential in terms of combating hateful speech. Moreover, the dynamics between AI and human rights were assessed through the lenses of data analytics, machine learning, and regulatory sandboxes.

The third panel aimed to complement the conventional discussions on AI and human rights by focusing on contextual and institutional dimensions. Specifically, it stressed the relevance of integrating transnational standards into regulatory environments at lower governance levels, since these tend to take more heed of citizens’ preferences; the expanding role of automation in administrative decision-making and the associated risk of not receiving an effective remedy; the ever-increasing role of AI-driven applications in business practices and the need to protect consumers from, for example, distortions of their personal autonomy or indirect discrimination; and the impact that AI applications can have on workers’ human rights in the workplace. These presentations yielded a broader discussion on the need to ensure a reliable framework of digital governance to protect the vulnerability of human beings as they adopt specific roles (i.e., citizens, consumers, and workers).

The fourth panel further expanded the analysis of how AI may expose individuals and groups to other risks when they are in particular situations that have so far been overlooked by current scholarship. Specifically, it discussed the right to freedom of religion or belief, the right to be ignored in public spaces, and the use of AI during the pandemic and its impact on human rights implementation. All three presentations stressed that AI surveillance is an important facet that should be targeted by regulatory efforts.

Lastly, the fifth panel ventured into a number of specific human rights and legal issues raised by the interplay between AI and the rights of different minority groups, such as refugees, LGBTQI persons, and women. The discussion mostly revolved around the serious discriminatory harm that the use of AI applications can result in. Reference was made, in particular, to bias in the training data employed by AI systems, as well as to the underrepresentation of minority groups in the technology sector.

A provisional conclusion

The discussion during the workshop showed that the startling increase in AI applications poses significant threats to several human rights. These threats, however, have not yet been entirely spelled out. The efforts of policymakers and academic researchers should therefore be directed at pinpointing the specific threats that would emerge as a result of AI deployment. Only then will it be possible to develop a legal and policy framework that responds to those threats and ensures sufficient protection of fundamental rights. Admittedly, this framework will need to grant some degree of discretion to lower governance levels, so that context-specific factors can be integrated. On a more positive note, the presentations from the workshop emphasized that AI applications can also be employed as a means of protecting fundamental rights.

Bio:

 

Jeroen Temperman is Professor of International Law and Head of the Law & Markets Department at Erasmus School of Law, Erasmus University, Rotterdam, Netherlands. He is also the Editor-in-Chief of Religion & Human Rights and a member of the Organization for Security and Cooperation in Europe’s Panel of Experts on Freedom of Religion or Belief. He has authored, among other books, Religious Hatred and International Law (Cambridge: Cambridge University Press, 2016) and State–Religion Relationships and Human Rights Law (Leiden: Martinus Nijhoff, 2010) and edited Blasphemy and Freedom of Expression (Cambridge: Cambridge University Press, 2017) and The Lautsi Papers (Leiden: Martinus Nijhoff, 2012).