
Addressing Intersectional Discrimination Issues brought about by New Technologies. What Role for EU Data Protection Law?

Credits: https://www.pexels.com/photo/pink-white-black-purple-blue-textile-web-scripts-97077/
 

By Alessandra Calvi

New realities brought about by technological developments are increasingly putting people at risk of being harmed. For instance, in 2015, an Amazon recruitment tool was found to rate candidates for technical roles in a manner discriminatory to women. In 2016, in the United States, software used to support judges in parole decisions was found to be more likely to flag African-American inmates than white inmates as being at risk of recidivism. These are just a few of many cases of discrimination performed by automated systems, highlighting the growing interrelationship between the right not to be discriminated against and data protection rights.

The right to non-discrimination is enshrined in a number of international human rights law instruments (e.g., United Nations human rights treaties such as the International Covenant on Civil and Political Rights, or the European Convention on Human Rights under the Council of Europe (CoE)). The right to personal data protection, conversely, lacks express recognition therein (with some exceptions, see e.g., the (modernised) Convention 108), as it emerged quite recently, in response to the spread of Information and Communication Technologies (ICTs) (European Union Agency for Fundamental Rights et al., 2018). As a consequence, international human rights courts and bodies called to decide data protection-related matters have derived this right from the interpretation of the right to privacy (European Union Agency for Fundamental Rights et al., 2018).

By contrast, both non-discrimination and data protection rights are expressly recognised as fundamental in the European Union (EU) and enshrined in primary law, namely the Treaties and the Charter of Fundamental Rights of the European Union (CFR). The CFR ensures the protection of personal data (Article 8 CFR) and forbids discrimination (Article 21 CFR). Similarly, the Treaty on the Functioning of the European Union (TFEU) refers to the EU's aim to combat discrimination in defining and implementing its policies and activities (Article 10 TFEU; see also Article 19 TFEU on the legislative procedure, and Article 16 TFEU on the right to personal data protection).

A type of discrimination that is receiving increasing attention from EU institutions and agencies (see e.g., this EIGE report or this EU Parliament resolution) and from international organisations dealing with human rights (see e.g., this guide developed by UN Women or this CoE blog post) is intersectional discrimination. Intersectional discrimination occurs when a person is treated less favourably due to different protected grounds (e.g., gender, race, religion) that operate and interact with each other inseparably and simultaneously, leading to this specific form of discrimination (European Union Agency for Fundamental Rights et al., 2018). It was first conceptualised in the United States to describe how African-American women experience discrimination (Crenshaw, 1989). Intersectional discrimination therefore differs from multiple discrimination, which happens when multiple grounds co-exist separately (European Union Agency for Fundamental Rights et al., 2018).

To differentiate between the two, consider the following examples. A female employee using a wheelchair could be a victim of multiple discrimination when her employer fails to ensure office accessibility or to prohibit misogynist slurs. In this case, the lack of accessibility affects all wheelchair users regardless of their gender, whilst the misogynist slurs affect all female employees regardless of their abilities. By contrast, rules banning headscarves may give rise to intersectional discrimination against Muslim women, as gender and religion cannot be separated in configuring this type of discriminatory situation (although the topic is admittedly controversial as, in some cases, courts in Europe have upheld such rules). Indeed, neither non-Muslim women nor Muslim men would be affected by them (see e.g., Center for Intersectional Justice).

Such differences are not just theoretical: they affect the justiciability of intersectional discrimination claims. Despite some advances, it has been noted that remedies under international human rights law tend to rely on a single-axis approach to discrimination (i.e., they consider one protected ground at a time) and thus fail to properly address intersectionality matters (Truscan & Bourke-Martignoni, 2016). This is also true at the EU level, where the fragmented EU anti-discrimination legal framework remains unsuited to duly protect victims. In C-443/15 Parris v Trinity College and others, the Court of Justice of the EU admitted that discrimination may be based on several grounds (e.g., sexual orientation, age); yet, to trigger protection, discrimination needs to exist in relation to each ground considered individually. Whereas this is the case for multiple discrimination, intersectional discrimination, by definition, assumes the inseparability of protected grounds (Crenshaw, 1989; European Union Agency for Fundamental Rights et al., 2018). This means that, currently, this type of discrimination suffers from an enforcement gap under both international human rights law and EU anti-discrimination law.

The problem of intersectional discrimination is in turn exacerbated by technological developments. In recent years, cases of intersectional discrimination performed by automated systems have multiplied across domains. Studies have shown that facial recognition fails disproportionately on Black and brown female faces (see e.g., Gender Shades). It was also demonstrated that the Dutch government, to investigate welfare-related fraud, relied for years on a risk classification algorithm that disproportionately flagged persons with a low income and a migration background as potential fraudsters (Amnesty International, 2021).

Against this backdrop, it is worth investigating whether data protection law may be an option to fill the gaps in anti-discrimination law to address intersectional discrimination. Specifically, this contribution evaluates to what extent the General Data Protection Regulation (GDPR) could lead the way toward greater protection of victims of intersectional discrimination performed by automated systems. Despite being an EU law instrument and not an international human rights treaty, the GDPR has an extraterritorial ambition (see e.g., Article 3 GDPR) and aims to become a global standard for data protection (De Hert & Czerniawski, 2016). Whereas this attitude has been criticised for being power- rather than value-driven, the Regulation has still received international attention and arguably contributed to improving data protection practices at a global level (Gstrein & Zwitter, 2021).

Although references to intersectional discrimination are lacking in the GDPR, Recitals 71 and 75 GDPR recognise discrimination among the possible risks to the rights and freedoms of data subjects (i.e., the natural persons to whom the information refers) arising from personal data processing. Considering that the GDPR is concerned with the protection of fundamental rights, in particular – but not only – personal data protection (Article 1(2) GDPR), it could still play a role in addressing intersectional discrimination, especially thanks to the risk-based approach that characterises it and to data subjects' rights.

Under the risk-based approach, the riskier the processing (depending, e.g., on the types of information processed, the scale of the processing, etc. (Article 29 Data Protection Working Party, 2017)), the more safeguards a controller (i.e., the entity determining the purposes and means of the processing) needs to implement (Gellert, 2016). Among these extra safeguards are the obligation to tailor technical and organisational measures to protect personal data (such as data protection by design and by default measures under Article 25 GDPR) and the obligation to perform data protection impact assessments (DPIAs) in case of high-risk personal data processing (Article 35 GDPR) (Gellert, 2016).

DPIAs are processes used to analyse ex ante the impacts of high-risk personal data processing (Kloza et al., 2021). They shall contain a systematic description of the processing operations and their purposes, an assessment of their necessity and proportionality, an assessment of the risks to the rights and freedoms of data subjects, and the measures to address such risks (Article 35(7) GDPR). Whereas DPIAs have often been operationalised in such a way as to address exclusively data security risks, neglecting the broader societal impacts of data processing (Koops, 2014; Mantelero, 2019), this attitude is increasingly questioned by scholars and regulators. Many are advocating for broader DPIAs entailing a thorough analysis of the consequences of data processing on a wide range of fundamental rights, such as non-discrimination (Autoriteit Persoonsgegevens, 2021; Kaminski & Malgieri, 2021). In the same vein, through DPIAs, controllers could engage in a preliminary analysis of the risks of intersectional discrimination brought about by data processing and therefore prevent harm.

Moreover, the flexibility granted by the risk-based approach would make it possible to overcome the limitations of both the EU anti-discrimination and data protection secondary law frameworks, which are tied to the letter of the law. On the one hand, the anti-discrimination directives protect only certain grounds, considered individually, and only in certain situations (Fredman, 2016). For instance, the protection on the grounds of race, ethnicity and sex covers only access to employment, welfare systems (specifically, the more limited social security in the case of sex) and goods and services, whereas sexual orientation, disability, religion or belief and age are protected only in the context of employment (European Union Agency for Fundamental Rights et al., 2018).

On the other hand, the GDPR grants enhanced protection only to the special categories of data listed in Article 9 GDPR (which include racial or ethnic origin, genetic data, data concerning health, data concerning a natural person's sex life or sexual orientation, etc.), whose use in the past has led to human rights abuses and/or individual harm (Georgieva & Kuner, 2020). Processing these data is forbidden unless one of the exceptions foreseen in Article 9(2) GDPR applies (e.g., the data subject has given explicit consent, the data have been manifestly made public, the processing is necessary for reasons of substantial public interest, etc.).

Due to the openness and context dependence of the notion of risk, new situations worthy of protection beyond those expressly identified in the law (such as those generated by the combination of protected grounds or arising from the processing of data that are not formally classified as special categories) could be identified, and action taken to prevent harm (through, e.g., DPIAs or other technical and organisational measures).

Finally, through data subjects' rights such as access (Article 15 GDPR), transparency (Article 12 GDPR), or the right not to be subject to solely automated decision-making (Article 22 GDPR), certain otherwise invisibilised cases of algorithmic discrimination may emerge. Decisions by Data Protection Authorities demonstrate how students and gig workers have already relied on Article 22 GDPR to complain about automated discrimination (Barros Vale & Zanfir-Fortuna, 2022). Whilst the intersectional profile of these cases remains underexplored so far, these provisions nevertheless hold promise for promoting the justiciability of claims advanced by people discriminated against by automated systems.

Yet, some caveats remain. Framing the processing of special categories of data as always exceptional is misleading. For instance, from a computing perspective, collecting this information when building a model is still necessary to ensure the quality of a dataset and prevent bias in automated systems, which occurs when certain individuals or groups of individuals are systematically and unfairly discriminated against in favour of others (Friedman & Nissenbaum, 1996; Žliobaitė & Custers, 2016). Special categories of data are also essential to draw statistics on diversity and evaluate the effectiveness of the positive actions (if any) undertaken to promote substantive equality (Laudati, 2016).

To avoid incurring extra administrative burdens, controllers may refrain from collecting special categories of data unless legally obliged to do so. However, legal obligations mandating sensitive data processing would undermine data subjects’ autonomy. Data subjects may lack trust in controllers holding their sensitive information, as controllers remain in the position of discriminating against them. 

Meanwhile, legal obligations mandating the processing of special categories of data are relatively scarce and scattered (Makkonen, 2016). At the EU level, Article 10(5) of the AI Regulation proposal appears promising for the configuration of the substantial public interest exception under Article 9(2)(g) GDPR. This exception allows controllers to process special categories of data when necessary for reasons of substantial public interest, on the basis of Union or Member State law, which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.

Nevertheless, some criticalities remain. For instance, Article 10(5) of the AI Regulation proposal allows providers (namely, the entities that develop an AI system or place it on the market in their own name ex Article 3 AI Regulation proposal) to process special categories of data to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to high-risk AI systems. However, where a controller does not coincide with a provider, it is uncertain whether the exception based upon Article 9(2)(g) GDPR read in conjunction with Article 10(5) AIR would be applicable.

Furthermore, due to current technical constraints, the benefits of this article for addressing intersectional discrimination remain rather theoretical. Although intersectional approaches to technical fairness are increasingly being investigated, at present they are not suitable to properly address intersectionality concerns. So far, intersectionality has been operationalised by collapsing membership of different subgroups into a single attribute, an oversimplification at odds with the rationale of the concept. Moreover, current fairness metrics enable the comparison of only two groups at a time, making intersectional analysis extremely difficult (Balayn & Gürses, 2021).
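As a rough illustration (a hypothetical Python sketch, not drawn from the cited work), the common workaround is to collapse, say, gender and race into one combined attribute and then compare the resulting subgroups two at a time – which both flattens the intersection and multiplies the number of pairwise comparisons as more protected grounds are added.

```python
# Illustrative sketch (hypothetical data) of how intersectionality is often
# operationalised in fairness analyses: subgroup membership is collapsed into
# a single combined attribute, and metrics then compare two groups at a time.
from itertools import combinations
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "race":     ["B", "W", "B", "W", "B", "W", "W", "B"],
    "approved": [0,   1,   1,   1,   0,   1,   1,   0],
})

# The "collapse": gender and race are merged into one attribute, losing the
# inseparable structure that the concept of intersectionality describes.
df["subgroup"] = df["gender"] + "_" + df["race"]
rates = df.groupby("subgroup")["approved"].mean()

# Pairwise comparison: with k subgroups there are k*(k-1)/2 pairs to check,
# which quickly becomes unwieldy as more protected grounds are added.
for a, b in combinations(rates.index, 2):
    gap = abs(rates[a] - rates[b])
    print(f"{a} vs {b}: approval-rate gap = {gap:.2f}")
```

With only two grounds of two values each there are already six pairwise comparisons; adding further grounds makes the combinatorics, and the interpretation of the results, rapidly harder.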

Lastly, to demand the enforcement of their rights, data subjects must be materially able to, e.g., decipher data protection notices, be aware of the existence and scope of data subjects' rights, and be persistent in case of non-compliance (González Fuster et al., 2022).

In sum, some GDPR provisions could be functional to address the challenges brought about by intersectional discrimination performed by automated systems, but with some caveats. As the two fields of data protection and non-discrimination are in constant evolution and the legislative panorama is changing, the analysis is to be continued.

This blog post summarises some of the main findings of Alessandra Calvi's paper, “Exploring the Synergies between Non-Discrimination and Data Protection: What Role for EU Data Protection Law to Address Intersectional Discrimination?”, European Journal of Law and Technology, vol. 14, no. 2, pp. 1–34, 2023. The full paper and a complete list of references are available here.

Bio:

Alessandra Calvi is a PhD candidate in the Law, Science, Technology and Society (LSTS) research group and in the d.pia.lab at the Vrije Universiteit Brussel (VUB) since August 2019 and in the Equipes Traitement de l'Information et Systèmes (ETIS) lab at CY Cergy Paris Université (CYU) since September 2021. In October 2021, she was entrusted with a research mandate on the EUTOPIA-funded interdisciplinary project 'Enhancing the inclusiveness of smart cities: reinterpreting Data Protection Impact Assessment (DPIA) under the General Data Protection Regulation (GDPR) through intersectional gender lenses' between VUB and CYU, under the joint supervision of the legal scholar Prof. Paul De Hert and the computer scientist Prof. Dimitris Kotzinos.
