Human Rights Here blog

“The computer said it was OK!”: human rights (and other) implications of manipulative design (Part 2/2)



By Dr. Silvia De Conca


Credit: Silvia De Conca



On November 19th, 2021, the “Human Rights in the Digital Age” working group of the NNHRR held a multidisciplinary workshop on the legal implications of ‘online manipulation’. This is Part 2 of a two-part series.

Manipulative design, autonomy, and human rights.

By turning individuals into means to an end, manipulative design infringes on their dignity, because it affects their intrinsic value as human beings. Manipulative design constrains individual autonomy, whether it is used for ‘paternalistic’ policymaking or by companies for profit. The very nature of manipulation makes it incompatible with self-determination, because manipulation acts beyond the control of its addressees, covertly steering their decision-making processes. Autonomy is one of the values underlying many human rights provisions. The European Court of Human Rights (ECtHR) has consistently affirmed that autonomy is an underlying principle, functional to interpreting some of the guarantees and protections offered by the European Convention on Human Rights (ECHR). This is the case, for instance, of the right to privacy (article 8 ECHR), which has been interpreted as protecting autonomy and self-determination (Pretty v The U.K., 29 April 2002). The right to privacy also protects individual integrity, which includes not just physical aspects, but also autonomy, feelings, self-esteem, and thoughts. Manipulation can potentially infringe upon both autonomy and integrity, as it interferes with the capability of individuals to take a decision and carry it out in the physical world (online or offline) in an independent fashion.

The ECHR also protects the freedom of thought, conscience, and religion of individuals (article 9). So far, the existing case-law and interpretations of this provision have focused solely on the religious aspect, discussing the relationship between citizens and states with regard to adhering to a belief. The debate around article 9 has turned to freedom of thought only in recent times, following the development of brain-computer interfaces (BCI) and the possibility for technology to tap into our minds. One of the topics discussed by experts is what happens if BCI enables companies or states to affect and manipulate the thoughts of individuals. In this sense, the widespread use of online manipulation makes this question more pressing. BCI is still in its very early stages, and its capability to affect the thoughts of individuals is uncertain. Online manipulation, on the contrary, is already here and is being used on millions of users of digital products and services. Considering how underdeveloped the interpretation of article 9 ECHR is with regard to freedom of thought, an intervention in this direction by the Council of Europe or the ECtHR would be desirable.

The interferences of manipulative design with autonomy are not limited to the individual level: in the medium and long term, the interaction of profiling and manipulative design can pose risks to the very foundations of democracy. Individual autonomy is, in fact, also considered functional to the development of citizens. Consequently, protecting individual autonomy is fundamental at the collective level as well, to foster a healthy democratic balance.

Both commercial and public-policy applications of manipulative design have the potential to affect democracy because, in the long term, individuals can lose their decision-making capacity; if individuals lose the ‘practice’ of taking decisions, this can reverberate at the collective level. The Council of Europe has intervened on the matter in its 2019 Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes. The declaration contains a recommendation for Member States to regulate persuasion used in combination with AI, in order to protect the democratic order. First, however, it is necessary to assess where the threshold lies between acceptable and undesirable manipulative design practices.

Finally, it is also necessary to reflect on the broader implications of manipulation in combination with the entire online architecture that permeates every aspect of our daily lives. Manipulative design leads to a power imbalance between individuals and companies, and between citizens and states. This brings attention to the legitimation of private companies, especially in cases of public-private partnerships. The online architecture is significantly in the hands of private parties, and this affects how legislative interventions are designed and, most of all, implemented. With the Internet of Things (IoT), the blurring of the boundaries between the online and offline dimensions can make manipulative design migrate from websites to our homes and streets. This sheds new light on the importance of the positive obligations of states to uphold and foster human rights (such as the abovementioned privacy and freedom of thought, among others) and shows the necessity for further reflection and investigation.

The author would like to thank student assistants Jorge Constantino and Jade Baltjes for taking notes during the workshop: their excellent notes were of great use while drafting this piece.


Dr. Silvia De Conca is the co-chair of the Human Rights in the Digital Age working group of the Netherlands Network for Human Rights Research. Silvia is Assistant Professor in Law & Technology at the Transnational Legal Studies department of the Vrije Universiteit Amsterdam, and a board member of the Amsterdam Law & Technology Institute at VU (ALTI Amsterdam). Her research interests include the law of AI and robotics, online manipulation, and privacy & data protection.

