“The computer said it was OK!”: human rights (and other) implications of manipulative design
By Dr. Silvia De Conca
This is Part 1 of a two-part series.
On November 19th, 2021, the “Human Rights in the Digital Age” working group of the NNHRR held a multidisciplinary workshop on the legal implications of ‘online manipulation’. The term ‘online manipulation’ refers to online services designed to manipulate individuals into clicking, buying, or sharing more (hereinafter also ‘manipulative design’).
Manipulation is not a new phenomenon. From spreading a fragrance in a store to increase sales, to displaying a €9.99 price instead of a round €10, marketing practices that covertly influence consumers into buying products are widespread. Some of these practices are prohibited or heavily regulated at the national and European Union levels, as consumer protection law has developed (think, for instance, of the infamous subliminal messages in TV advertising, or of unfair practices such as misleading information about a product).
For the past three decades, however, these marketing practices have been combined with computers, the Internet, and now Artificial Intelligence (AI), in a perfect mix of invisibility, pervasiveness, and automation. This development is beginning to catch the attention of lawmakers in the USA and in Europe, who wonder how traditional forms of manipulation relate to online manipulation, and how to regulate the latter. Under current European legislation, in fact, it is not clear whether and how the existing consumer protection rules apply to online manipulation. Similarly, individuals might be protected from certain manipulative designs by the General Data Protection Regulation (GDPR), but most likely only with regard to data collection and automated decisions. The legislative landscape appears fragmented, and the application of existing laws to online manipulation is uncertain.
These were some of the topics addressed at the workshop.
Artificial Intelligence and Human Rights.
The “Human Rights in the Digital Age” workshop opened with a keynote by Director Jan Kleijssen (Council of Europe, Information Society and Action against Crime Directorate). Director Kleijssen reflected on the relationship between Artificial Intelligence (AI) and human rights, focusing on the role of the Council of Europe and its forward-looking interventions in regulating technological risks and harms. As a highlight, Director Kleijssen kindly shared with the attendees the news that the Council of Europe is planning to work on an ad hoc convention on AI regulation. Work on the convention should begin in May 2022, with consultations involving several interested parties including, for the first time, companies.
Director Kleijssen pointed out that technologies such as AI can be a force for good. The risks that technology poses to individuals and society, however, are real. The use of manipulative design techniques in the online environment, or in digital products, raises several concerns: the diffusion of online services in our daily lives and the progressive blurring of the online and offline domains via the Internet of Things (IoT) make online manipulation potentially more pervasive than traditional manipulative marketing.
The effects of manipulative design go beyond consumer protection or personal data protection. Given the central role the internet now plays in daily life, manipulative design is potentially harmful also from the perspective of human rights. Individuals carry out their daily actions through the mediation of information technologies: websites, smart speakers, smartphones, online games, and e-shops. If every single action is influenced through manipulative design, what are the effects on the dignity, autonomy, privacy, and freedom of thought of individuals?
The workshop aimed at exploring this question (and more) by putting multiple disciplines in conversation with each other in panel discussions addressing human rights law, AI, tax law, behavioural sciences, consumer protection, and ethics.
Manipulating through design.
Manipulative design includes a wide range of techniques and strategies that aim at covertly influencing the decision-making processes of the users of a digital product or online service, steering them towards a decision or behaviour that favours the manipulator’s interests over their own. ‘Dark patterns’, for instance, are user interfaces (UI) that combine colours, images, and page layout to manipulate users into accepting tracking cookies or clicking on a banner or advertisement. Social networks also make use of manipulative design, both in the form of dark patterns and through algorithms that select and organise the content displayed to users, based on profiles elaborated from personal data. The algorithm selects content that hooks users and triggers their engagement, creating the addiction effect that many users experience nowadays. In particular, it identifies users’ vulnerabilities, to be leveraged for marketing and advertising purposes.
Manipulation is relational in nature, contextual, and strictly connected to power: the manipulator gains power over and exercises control over the manipulated, putting his or her own interests before those of the manipulated individual. As explained by Susser, Roessler and Nissenbaum, the manipulated individual becomes a means to an end (such as profit, for instance, but not only). As described by Carolin Ischen, manipulative design represents a shift in communication: information technologies used to be a mere medium for marketing, putting companies in contact with customers. Now, technology has become the interlocutor, and users engage in an exchange with it. Companies leverage manipulation to develop long-term relationships with users and gain a dominant position in the market.
Manipulative design is not used only by private parties, and not only for business purposes. Manipulation has been deployed as a policy tool, under the name of ‘nudge’, coined by C. R. Sunstein and R. H. Thaler in their famous book of the same name, to achieve objectives in line with public interests, such as increasing the number of organ donors or making streets safer for pedestrians. The websites of many tax authorities are designed to prompt users to fill in all the necessary information. Users can also voluntarily subject themselves to manipulative practices, for example to stop smoking or exercise more, using smartphone apps and wearable IoT devices.
Manipulative design aimed at helping users with a ‘good cause’ and manipulative design aimed at tricking users into doing a company’s bidding are both based on the same conditions and mechanisms. Simply put, manipulation works the same way whether it is used for ‘good’ or for ‘evil’, whether it empowers users to achieve a healthier lifestyle or subjects them to the manipulator’s will. Both forms of manipulation potentially affect users’ autonomy; however, the purpose of some forms of manipulation might make them tolerable or even desirable.
Identifying which constitutive elements of manipulative design are relevant for the law is challenging. Doubts arise, for instance, about whether the law should intervene only when manipulative design results in harm. To further complicate things, manipulative design can act as a gateway to indirect harms. The algorithms used by many social networks catalogue users based on their vulnerabilities. They infer the emotional state of users, how prone they are to complete a purchase from an advertising banner, what information or topic makes a user react and engage more, how certain characteristics affect their preferences and spending, and so on. The UI is then designed to exploit those vulnerabilities, steering users into clicking, buying, opening content, or visiting a web store. As a result, vulnerable individuals, such as older people, those in a fragile emotional state, or those prone to addiction, might suffer economic consequences (overspending, gambling, etc.), as well as psychological ones. Even more worryingly, manipulative design can result in discrimination based on race, sexual preference, age, gender, and medical conditions. Because manipulative design leverages vulnerabilities and involuntary behaviours, it creates occasions for discrimination. In this sense, manipulative design affects the very values that underpin human rights: dignity and autonomy.
The author would like to thank student assistants Jorge Constantino and Jade Baltjes for taking notes during the workshop: their excellent notes were of great use while drafting this piece.
The remainder of this blog will be presented next week in Part 2.
Dr. Silvia De Conca is the co-chair of the Human Rights in the Digital Age working group of the Netherlands Network for Human Rights Research. Silvia is Assistant Professor in Law & Technology at the Transnational Legal Studies department of the Vrije Universiteit Amsterdam, and board member of the Amsterdam Law & Technology Institute at VU (ALTI Amsterdam). Her research interests include law of AI and robotics, manipulation online, privacy & data protection.