Towards Corporate Obligations for Freshwater?

The European Commission's Proposal for a Corporate Sustainability Due Diligence Directive and Freshwater Issues

 

Source: https://unsplash.com/photos/rrfdqjJWwmU 
By Candice Foot

 

Freshwater is essential for all life on this planet. Despite this fundamental life-sustaining role, the anthropogenic pressures exerted on freshwater resources have increased exponentially, and some of the most substantial of these pressures are caused by companies. Companies exacerbate freshwater scarcity through the sheer volume of freshwater they extract: globally, approximately 84% of freshwater resources are withdrawn by the agricultural and industrial sectors. This mass extraction contributes to freshwater scarcity in the basins where companies operate, leaving insufficient freshwater to meet basic human and environmental needs. Companies are also a major source of freshwater pollution, discharging harmful agricultural effluents and industrial wastewater contaminated with chemical and radiological substances into surrounding freshwater sources. This deteriorates freshwater quality, causes serious health problems for people, and destroys ecosystems.

In response to companies’ adverse impacts on human rights and the environment, regulatory and governance instruments have been developed that aim to prevent and mitigate these impacts. These include prominent international frameworks like the 2011 United Nations Guiding Principles on Business and Human Rights and the 2011 OECD Guidelines for Multinational Enterprises, which introduced a core concept for companies: human rights and environmental due diligence. Due diligence is an ongoing process that companies should implement to identify, prevent, mitigate and account for how they address potential and actual human rights and environmental impacts in their own operations, their global value chains, and other business relationships. The most recent rendition of these instruments is the European Commission’s 2022 Proposal for a Corporate Sustainability Due Diligence Directive, which introduces several obligations for companies, the primary of which is a due diligence obligation.

Drawing from this important context and the need to mitigate companies’ adverse impacts on freshwater resources, it is crucial to explore how the draft Directive deals with companies’ adverse impacts on freshwater by looking at the material scope of the due diligence obligation.

The material scope of the due diligence obligation is contained within a two-part Annex and is defined by a limited catalogue of human rights norms and environmental standards drawn from specifically selected international instruments.

Part I of the Annex pertains to human rights included in international human rights instruments and covers these rights in two ways. First, it explicitly lists a limited number of human rights norms. Second, it includes a “catch all” phrase which refers to a list of human rights instruments. One of the rights explicitly listed is the prohibition of causing any measurable environmental degradation that denies a person access to “safe and clean water.” While this would seem to encompass the human right to water, the Annex formulates the right in a novel way that constitutes a limited construction of the international human right to water’s normative content. That normative content encompasses three elements: quantity, quality, and accessibility. Quantity requires the freshwater supply to be sufficient and continuous for personal and domestic uses, like drinking, cooking and personal and domestic hygiene. Quality entails that it should be clean and free from harmful substances. Accessibility necessitates four elements: physical, economic and informational accessibility, as well as non-discrimination.

The draft Directive’s human right to water appears to be narrower than the international right. It encompasses the normative content of the right relating to accessibility through the word “access”, and to quality through the words “safe and clean.” However, there is no explicit reference to quantity, other than, potentially, “drinking.” If “drinking” is indicative of the quantity of freshwater the right encompasses, this is a narrow conceptualisation compared to the international right, which covers multiple uses in addition to drinking, like cooking, cleaning and hygiene.

The right could potentially be included in its full normative content via the “catch all” phrase. The instruments listed there include the International Covenant on Economic, Social and Cultural Rights, the instrument from which the international right to water was derived. They also include other instruments that contain the human right to water for specifically protected groups like women, children and persons with disabilities. As the draft Directive envisions broadening the scope of human rights, it is plausible that the narrowly defined explicit human right to water could be expanded into a broader right. However, it remains unclear whether the broader right will be covered by the draft Directive.

Part II pertains to environmental standards and lists a few standards included in several international environmental conventions or multilateral environmental agreements (MEAs). The material scope of the selected MEAs is wide but arbitrary, ranging from the protection of biological diversity to protecting the environment against certain chemical pollutants. The list is exhaustive, limited to those standards contained within the Annex’s 12 articles.

The draft Directive’s reliance on MEAs renders it dependent on the fragmented patchwork of MEAs in international environmental law, and results in it missing the issues that the regime has not regulated. Additionally, even though there are over 250 MEAs currently in force, the draft Directive draws on only seven of them.

While freshwater is not explicitly mentioned in the environmental standards, some of the MEAs can encompass freshwater. For example, the environmental standards in some of the MEAs list chemicals that are known to pollute freshwater. This is evident in the Minamata Convention on Mercury, which covers mercury, and the Stockholm Convention on Persistent Organic Pollutants, which covers dieldrin, both chemicals known to pollute freshwater. While it appears positive that some freshwater pollution is encompassed by the MEAs, due diligence obligations are limited to those chemicals contained within the MEAs. Pollution caused by chemicals or substances not contained therein falls outside the material scope of the draft Directive. Considering that the number of chemicals in the world is estimated to exceed 350,000, this is a very limited material scope.

Freshwater depletion is not explicitly contained within any of the MEAs; the only possible way it may be implicitly encompassed is with reference to the Convention on Biological Diversity. The Convention’s website makes clear that freshwater itself is not biodiversity, but biodiversity is the life associated with freshwater, and thus the two cannot be separated. On this construction it may be possible for freshwater depletion to be encompassed within biological resources; however, this interpretation is not guaranteed, and it remains to be determined whether it will be adopted.

While the draft Directive certainly takes some positive steps towards imposing legal obligations on companies to include freshwater issues in their due diligence processes, it is insufficient to cover all the adverse impacts that companies have on freshwater from both a human rights and an environmental perspective.

Moving forward, the draft Directive should be amended to more comprehensively account for how companies adversely impact freshwater. This could be done by amending the listing approach to encompass freshwater issues more comprehensively. From the human rights perspective, the human right to water could be amended to align with the full normative scope as it exists in international instruments. From the environmental perspective, a wider range of MEAs covering freshwater could be included; however, if this approach is adopted, the environmental material scope will always be limited to those specific environmental issues that are regulated by MEAs. Alternatively, the listing approach could be abandoned altogether. This would align with international instruments like the UNGPs and OECD Guidelines, which acknowledge that companies can adversely impact virtually the full scope of human rights and environmental standards and should thus assess their adverse impacts on the complete spectrum of these rights and standards as contained in international instruments.

Such amendments are necessary if the draft Directive is to have any meaningful impact on how this life-sustaining resource is used by companies.

Bio:

Candice Foot is a PhD researcher at the Erasmus Graduate School of Law in Rotterdam, the Netherlands, where she works in the interdisciplinary research initiative “Public and private interests: A new balance.” Her research is centred on exploring corporate responsibilities to respect freshwater. She is also a member of the Human Rights Here Editorial Board, as well as a member of the Netherlands Network for Human Rights Research Working Group on Business and Human Rights. 

 

Artificial Intelligence & Human Rights: Friend or Foe?

 

By Alberto Quintavalla and Jeroen Temperman

 

Source: https://www.techslang.com/ai-and-human-rights-are-they-related/

 

The problem

Artificial intelligence (‘AI’) applications can have a significant impact on human rights. This impact can be twofold. On the one hand, AI may contribute to the advancement of human rights. A striking example is the use of machine learning in healthcare to improve precision medicine so that patients receive better care. On the other hand, AI can pose an obvious risk to respect for human rights. Unfortunately, there are countless examples. Perhaps the most obvious one is the use of algorithms that discriminate against ethnic minorities and women.

The call

It is in this context that international and national institutions are calling for further reflection on the prospective impact of AI. These calls are especially advanced at the European level, including through the active involvement of the Council of Europe. The time is ripe to start mapping the risks that AI applications could pose to human rights and, subsequently, to develop an effective legal and policy framework in response to those risks.

The event

On 28 October 2021, the hybrid workshop ‘AI & Human Rights: Friend or Foe?’ took place. On this occasion, several researchers from around the world met to discuss the prospective impact of AI on human rights. The event was organized by the Erasmus School of Law, and benefitted from the sponsorship of both the Netherlands Network for Human Rights Research and the Jean Monnet Centre of Excellence on Digital Governance.

Zooming out: the common theme(s)

The workshop consisted of various presentations, each addressing specific instances of the complex interaction between AI and human rights. Nonetheless, the discussion with the audience highlighted two common challenges in dealing with the prospective impact of AI on human rights. Firstly, recourse to market mechanisms, or the use of regulatory instruments aiming to change individuals’ economic incentives (and, accordingly, behaviour), is not sufficient to address the issues presented by the use of AI. Regulation laying down a comprehensive set of rules applicable to the development and deployment of all AI applications is necessary to fill the existing regulatory gaps and safeguard fundamental rights. This is in line with the EU Commission’s recent proposal setting out harmonized rules for AI, including the need to subject so-called high-risk AI systems to strict obligations prior to their market entry. Secondly, and relatedly, the development of international measures is not enough to ensure the effective management of local issues and to delineate regulation that is responsive to particular circumstances. Society should regularly look at the context in which the emerging issues unfold. AI systems are in fact deployed in culturally different environments, each with specific local features.

Zooming in: the various panels

The remaining part of this blog post provides a short overview of the more specific arguments and considerations presented during the workshop, which consisted of five panels.

The first panel revolved around questions of AI and content moderation, biometric technologies, and facial recognition. The discussion emphasized major privacy concerns as well as the chilling effects on free speech and the freedom of association in this area.

The second panel, among other issues, continued the content moderation discussion by arguing that the risks of deploying AI-based technologies can be complemented by the human rights potential thereof in terms of combating hateful speech. Moreover, the dynamics between AI and human rights were assessed through the lenses of data analytics, machine learning, and regulatory sandboxes.

The third panel aimed to complement the conventional discussions on AI and human rights by focusing on the contextual and institutional dimensions. Specifically, it stressed the relevance of integrating transnational standards into regulatory environments at lower governance levels, since these tend to take more heed of citizens’ preferences; the expanding role of automation in administrative decision-making and the associated risk of not receiving an effective remedy; the ever-increasing role of AI-driven applications in business practices and the need to protect consumers from, for example, distortion of their personal autonomy or indirect discrimination; and the impact that AI applications can have on workers’ human rights in the workplace. These presentations yielded a broader discussion on the need to ensure a reliable framework of digital governance to protect the vulnerability of human beings as they adopt specific roles (i.e., citizens, consumers, and workers).

The fourth panel further expanded the analysis of how AI may expose individuals and groups to other risks when they are in particular situations that have so far been overlooked by current scholarship. Specifically, it discussed the right to freedom of religion or belief, the right to be ignored in public spaces, and the use of AI during the pandemic and its impact on human rights implementation. All three presentations stressed that AI surveillance is an important facet that should be targeted by regulatory efforts.

Lastly, the fifth panel ventured into a number of specific human rights and legal issues raised by the interplay between AI and the rights of different minority groups such as refugees, LGBTQI people and women. The discussion mostly revolved around the serious discriminatory harm that the use of AI applications can cause. Reference was made, in particular, to bias in the training data employed by AI systems as well as to the underrepresentation of minority groups in the technology sector.

A provisional conclusion

The discussion during the workshop showed that the startling increase in AI applications poses significant threats to several human rights. These threats are, however, not yet entirely spelled out. The efforts of policymakers and academic researchers should therefore be directed at pinpointing the specific threats that would emerge as a result of AI deployment. Only then will it be possible to develop a legal and policy framework that responds to those threats and ensures sufficient protection of fundamental rights. Admittedly, this framework will need to grant some measure of discretion to lower governance levels so that context-specific factors can be integrated. On a more positive note, the presentations from the workshop emphasized that AI applications can also be employed as a means of protecting fundamental rights.

Bio:

 

Jeroen Temperman is Professor of International Law and Head of the Law & Markets Department at Erasmus School of Law, Erasmus University, Rotterdam, Netherlands. He is also the Editor-in-Chief of Religion & Human Rights and a member of the Organization for Security and Cooperation in Europe’s Panel of Experts on Freedom of Religion or Belief. He has authored, among other books, Religious Hatred and International Law (Cambridge: Cambridge University Press, 2016) and State–Religion Relationships and Human Rights Law (Leiden: Martinus Nijhoff, 2010) and edited Blasphemy and Freedom of Expression (Cambridge: Cambridge University Press, 2017) and The Lautsi Papers (Leiden: Martinus Nijhoff, 2012).

 

 

Alberto Quintavalla is Assistant Professor at the Department of Law & Markets at Erasmus School of Law (Erasmus University Rotterdam) and an affiliated researcher at the Jean Monnet Centre of Excellence on Digital Governance. He received his doctoral degree from Erasmus University Rotterdam in 2020 with research on water governance, conducted at the Rotterdam Institute of Law & Economics and the Department of International and European Union Law. He has been a visiting researcher at the Hebrew University of Jerusalem and the European University Institute. His research interests lie at the intersection of environmental governance, human rights, and digital technologies. He is admitted to the Italian Bar.

 

Automated Content Moderation, Hate Speech and Human Rights

by Natalie Alkiviadou


Source: mikemacmarketing

Within the framework of a multi-stakeholder, cross-border EU project entitled SHERPA, ‘Shaping the Ethical Dimensions of Smart Information Systems (SIS)’, led by De Montfort University (UK), a deliverable was developed on 11 specific challenges that SIS (the combination of artificial intelligence and big data analytics) raise with regard to human rights. This blog post focuses on one of those challenges, namely ‘Democracy, Freedom of Thought, Control and Manipulation.’ This challenge considered, amongst others, the impact of SIS on freedom of expression, bias and discrimination. Building on the initial findings of that report, this short piece examines the use of Artificial Intelligence (AI) in the content moderation of online hate speech. This particular challenge was chosen given that the moderation of online hate speech is a hot potato for social media platforms, States and other stakeholders such as the European Union, with recent developments such as the EU’s Digital Services Act and the proposed Artificial Intelligence Act seeking to cultivate new grounds through which online content and the use of AI will be managed.

Online communication occurs on a “massive scale”, rendering it impossible for human moderators to review all content before it is made available. The sheer quantity of online content also makes reviewing even reported content a difficult task. In response, social media platforms are depending more and more on AI in the form of automated mechanisms that proactively or reactively tackle problematic content, including hate speech. Technologies for handling content such as hate speech are still in their “infancy”. The algorithms developed to achieve this automation are habitually customized for content type, such as pictures, videos, audio and text. The use of AI is not only a response to issues of quantity but also to increasing State pressure on social media platforms to remove hate speech quickly and efficiently. Examples of such pressure include, inter alia, the German NetzDG, which requires large social media platforms to remove reported content that is deemed illegal under the German Penal Code and to do so quickly (sometimes within 24 hours) at the risk of heavy fines (up to 50 million Euros in certain cases). To be able to comply with such standards, companies use AI alone or in conjunction with human moderation to remove allegedly hateful content. As noted by Oliva, such circumstances have prompted companies to “act proactively in order to avoid liability…in an attempt to protect their business models”. Gorwa, Binns and Katzenbach highlight that as “government pressure on major technology companies build, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation”. Further, the “work from home” Covid-19 situation has also led to enhanced reliance on AI, accompanied by errors in moderation. In fact, as noted by YouTube, for example, the amount of in-office staff was reduced due to COVID-19, meaning that the company temporarily relied more on technology for content review, which could lead to errors in content removals.

Over-blocking and Freedom of Expression

Relying on AI, even without human supervision, can be supported when it comes to content that could never be ethically or legally justifiable, such as child abuse. However, the issue becomes more complicated in contested areas, where there is little or only complicated legal (or ethical) clarification of what should actually be allowed (and what should not) – such as hate speech. In the ambit of such speech, Llansó states that the use of these technologies raises “significant questions about the influence of AI on our information environment and, ultimately, on our rights to freedom of expression and access to information”. For example, YouTube wrongly shut down (then reinstated) an independent news agency reporting war crimes in Syria. Several videos were wrongly flagged as inappropriate by an automatic system designed to identify extremist content. Hash matching technologies such as PhotoDNA also seem to operate with a “context blindness” that could explain the removal of the videos on Syria. YouTube subsequently reinstated thousands of the videos which had been wrongly removed.

As highlighted in a Council of Europe report, automated mechanisms directly impact freedom of expression, which raises concerns vis-à-vis the rule of law and, in particular, notions of legality, legitimacy and proportionality. The Council of Europe noted that the enhanced use of AI for content moderation may result in over-blocking and consequently place freedom of expression at risk. Beyond that, Gorwa, Binns and Katzenbach argue that the increased use of AI threatens to exacerbate the already existing opacity of content moderation, further complicate the issue of justice online and “re-obscure the fundamentally political nature of speech decisions being executed at scale”. Automated mechanisms fundamentally lack the ability to comprehend the nuance and context of language and human communication. The following section provides an example of how automated mechanisms may become inherently biased and thereby raise further concerns about respect for the right to non-discrimination.

The Issue of Bias and Non-Discrimination

AI can be infiltrated with biases at the design or enforcement stage. In its report ‘Mixed Messages? The Limits of Automated Social Media Content Analysis’, the Center for Democracy & Technology revealed that automated mechanisms may disproportionately impact the speech of marginalized groups. Although technologies such as natural language processing and sentiment analysis have been developed to detect harmful text without having to rely on specific words or phrases, research has shown that they are “still far from being able to grasp context or to detect the intent or motivation of the speaker”. Such technologies are simply not equipped to pick up on the language used, for example, by the LGBTQ community, whose “mock impoliteness” and use of terms such as “dyke,” “fag” and “tranny” occur as a form of reclamation of power and a means of empowering members of this community to deal with hatred and discrimination.

Through the conscious or unconscious biases that mark the automated mechanisms moderating content, as depicted in the above examples, the use of AI for online hate speech not only infringes freedom of expression by over-blocking and silencing dissenting voices, but also shrinks the space for minority groups such as the LGBTQ community. This shrinking space, resulting from inherent bias, leads to a violation of a fundamental doctrine of international human rights law, namely non-discrimination.

Concluding Comments

As noted by Llansó, the above issues cannot be tackled with more sophisticated AI. Tackling hate speech by relying on AI without human oversight (to say the least), and doing so proactively rather than only reactively, places freedom of expression in a fragile position. At the same time, the inability of these technologies to pick up on the nuances of human communication, in addition to the biases that have affected their make-up and functioning, raises issues pertaining to the doctrine of non-discrimination.

Bio:

Natalie Alkiviadou is a Senior Research Fellow at Justitia, Denmark.

The rights of dead persons and the right to water in India on the occasion of COVID-19

by Nabil Iqbal and Mohd Altmash

 

Source: Getty Images


Amid the spike of COVID-19 cases in India during the second wave of the pandemic, various Indian media (see e.g. The Hindu and the Indian Express) reported visuals of uncounted human dead bodies floating in rivers in the states of Uttar Pradesh and Bihar. These reports received worldwide coverage, and the Indian government was criticized for failing to dispose of the bodies respectfully. On 14 May 2021 the National Human Rights Commission of India issued a notice to the Central and State Governments advocating for the rights of the deceased and directed them to prepare a standard operating procedure for the proper burial of the COVID-related deceased in order to maintain their dignity.

Further, a petition was filed before the Supreme Court of India (SC) on 2 June 2021 alleging that the ongoing situation amounts to a violation of the human rights summarized in the lines that follow.

Rights of Dead Persons

There is no specific legislative framework in India that protects the rights of people who have died. However, several judicial pronouncements of the SC and the High Courts (HC) have recognized the rights of the deceased and have included them within the purview of Article 21 of the Indian Constitution, which has expanded the horizons of the right to life. These rights include the right to die with dignity and the right to have a decent burial.

Right to die with dignity - The most representative case on the right to die with dignity is Parmanand Katara v Union of India (U.O.I.), where the SC explicitly held that the right to life and dignity extends not only to living persons but also to their dead bodies. Further, through judicial activism, the Madras HC opined that “the right to life enshrined in Article 21 cannot be restricted to mere animal existence. It means something much more than just physical survival”. Building on this view, the Calcutta HC in Vineeta Ruia v The Principal Secretary, West Bengal, held that the right to dignity guaranteed under Article 21 extends not only to living persons but also to their remains after death.

Right to have a decent burial - The question regarding the right to a decent burial was raised in Vikash Chandra v. U.O.I. In this case, the Patna HC held that it is the responsibility of the government to provide decent burials in compliance with the law and with respect for human dignity. Later, the SC in Ashray Adhikar Abhiyan v. U.O.I. recognized the right to a decent burial as a fundamental right within the right to life.

At the international level, the rights of dead persons are not explicitly enshrined in international human rights (IHR) law, but certain provisions indirectly recognize their rights. These include: a) the United Nations Commission on Human Rights resolution of 2005, which emphasized the significance of managing human remains in a dignified way, along with their disposal respecting the needs of the families; b) the Universal Declaration on Bioethics and Human Rights, which mentions that special measures should be taken regarding the rights and interests of those who are incapable of exercising their autonomy; c) the UN Inter-Agency Standing Committee’s Operational Guidelines on Human Rights and Natural Disasters, which recommend that appropriate measures be taken ‘to facilitate the return of remains to the next of kin. Measures should allow for the possibility of recovery of human remains for future identification and reburial if required’; d) Article 3(a) of the 1990 Cairo Declaration on Human Rights in Islam, which provides that “in the event of the use of force and in case of armed conflict- it is prohibited to mutilate dead bodies.”

The Geneva Conventions of 1949, under international humanitarian law (IHL), explicitly recognize the rights of dead soldiers under Article 16. Even the World Health Organization (WHO) has issued detailed guidelines and protocols for the proper and dignified management of corpses.

In contrast to IHL, international human rights law does not contain any express reference to the treatment of dead bodies, including the rights of the dead and the obligations of states. However, in the vast majority of states, including India, the rights of dead persons and offences against dead persons have been incorporated into domestic legislation.

Right to clean water

Apart from violating the rights of dead persons, the dumping of dead bodies in the river amounts to a violation of the right to clean water, which has been recognized by both municipal and international law. The right to clean water and a healthy environment is recognized and guaranteed under Article 21 (right to life) and Articles 48A & 51A(g) (protection of the environment) of the Indian Constitution through the liberal interpretation of the Indian judiciary. In the landmark case of MC Mehta v U.O.I., the SC explicitly ruled that the right to clean water and a healthy environment is a fundamental right under Article 21 of the Indian Constitution. The SC has reiterated the same in various subsequent judgments (Narmada Bachao Andolan v U.O.I.; Bandhua Mukti Morcha v U.O.I.; Subhash Kumar v State of Bihar).

Furthermore, the Water (Prevention and Control of Pollution) Act, 1974 and the Environment (Protection) Act, 1986 are significant pieces of legislation in India that outline measures for clean water and a healthy environment.

In international law, the right to water has been recognized by Resolution 64/292 of the United Nations General Assembly, which acknowledges the right to clean water as essential for the realization of other human rights. Similarly, the Human Rights Council, in resolution 15/9 (later reaffirmed by UNGA Resolution 70/169), stated that the human right to safe drinking water stems from the right to an adequate standard of living. This right is also connected to several other rights, namely the right to life and the right to the highest attainable standard of physical and mental health.

The 2030 Agenda for Sustainable Development, adopted by the United Nations General Assembly in September 2015, comprises 17 Sustainable Development Goals (SDGs). The 2030 Agenda makes specific reference to human rights, equality, and non-discrimination principles. The SDGs are universal and goal-oriented in nature, applying to all countries and all peoples around the world. The SDG framework contains a dedicated goal (SDG 6) for water and sanitation: to “ensure availability and sustainable management of water and sanitation for all.” Further, in 2018 the Human Rights Council encouraged development partners to adopt a human rights-based approach when designing, implementing, and monitoring programmes supporting national activities related to the rights to water and sanitation.

In addition, these rights have also been recognized in the Universal Declaration of Human Rights (Article 25), the Convention on the Elimination of All Forms of Discrimination against Women (Article 14(2)(h)), and the Convention on the Rights of the Child (Article 24(2)).

India has thus implemented the provisions regarding the right to water in a strict sense. As discussed earlier, the right to water was not initially recognized by the legislature or the Constitution, but the Indian judiciary has declared it a fundamental right that cannot be compromised for any reason.

Conclusion

In spite of various judicial pronouncements and legal frameworks, these rights are being violated in India. The above-mentioned incident is one example of such violations. Such negligent acts could have serious consequences in the future. Therefore, the government should strengthen the laws that protect the rights of human beings (including the dead). At the same time, it is also necessary to pay attention to deteriorating environmental conditions, especially the ecology of rivers, which is being affected by such negligence on the part of individuals or agencies of the state.

Bios

Nabil Iqbal is a final year undergraduate law student from Jamia Millia Islamia, New Delhi, India. He has a strong interest in International Human Rights, Environmental and Humanitarian Law.

Mohd Altmash is an undergraduate student of B.A.LLB. from Jamia Millia Islamia, New Delhi, India. His areas of interest include International Law along with Human Rights and Constitutional Law.

Doctoral Research Forum Blog Series: Part VIII

Eclipsing Human Rights: Why the International Regulation of Military AI is not Limited to International Humanitarian Law

By Taylor Woodcock

Source: Freepik

Much has been written on the transformative potential of artificial intelligence (AI) for society. The surge in recent technological advancements that seek to leverage the benefits of AI and machine learning techniques has raised a host of questions about the adverse impacts of AI on human rights. Yet, when it comes to the debate on military applications of AI, the framework of international human rights law (IHRL) tends to receive rather cursory treatment. Greater examination of the relevance of IHRL is therefore necessary in order to more comprehensively address the legality of the development, acquisition and use of AI-enabled military technologies under international law.

AI and human rights

A number of concerns about the potential of AI technologies to interfere with human rights have been raised in recent years. Problems relating to the opacity and lack of transparency and predictability of AI systems, biases in training data and the resulting output generated, risks of discrimination and breaches of privacy, adverse effects on human dignity, and the difficulty of identifying who to hold responsible for these harms have all been highlighted regarding the use of AI in a number of different domains. Amongst these are the use of AI for detecting welfare fraud, as tools in the criminal justice system or for policing, in the management of borders and migration, and in facial recognition and surveillance technologies, to name but a few. This has led to calls for the use of IHRL as a broad overarching framework for the governance of AI, ensuring respect for rights at all stages in the development and use of these technologies. Reliance on such a framework would have the benefit of robust human rights enforcement mechanisms, as well as the availability of well-developed best practices in areas such as human rights impact assessments and due diligence. Yet, whilst these concerns may equally hold relevance for the use of AI in the military domain, at present this appears to be an underexplored issue.

IHL eclipsing debates on military AI

It is commonly recognised in debates on military AI that the legality of these technologies engages a number of bodies of international law, IHRL amongst them. Nevertheless, in these debates recourse is typically made to international humanitarian law (IHL) as the primary regime regulating military applications of AI, with a few exceptions. Of course, in this context IHL remains crucial, and reliance on this body of law makes sense given the intrinsic connection between military technologies and the laws governing the means and methods of warfare. Additionally, political debates on autonomous weapons take place under the auspices of the Convention on Certain Conventional Weapons, which forms part of the corpus of IHL treaties. However, the application of IHL to military AI does not eclipse the relevance of IHRL in this context. Debates about the interplay of IHL and IHRL have persisted in recent decades, yet regardless of the theoretical approach adopted, it is now generally accepted that IHRL continues to apply during armed conflict. Rather than assuming that human rights protections will be displaced by IHL, it is vital to examine more closely, on a norm-by-norm basis, the implications IHRL holds for the development and use of military AI.

Human rights and military AI

There are a number of circumstances in which the use of AI-enabled military technologies engages IHRL, including when States conduct surveillance, engage in counter-terrorism and other security operations, employ anticipatory military strategies or operate in the margins of existing armed conflicts. The complex nature of contemporary conflicts highlights the need for States to account for both IHL and IHRL when military applications of AI are deployed, depending on how the circumstances on the ground inform the applicable paradigm. In one of the most extensive assessments to date of how autonomous weapons may interfere with human rights, Brehm frequently uses sentry systems as an illustrative example of when States may be bound by human rights standards on the use of force during armed conflict. Less often addressed in debates is the question of what role IHRL plays when applied alongside IHL during active hostilities.

During the conduct of hostilities IHRL obligations will often be interpreted in light of IHL. As put by the International Court of Justice in its Advisory Opinion of 1996 on the Legality of the Threat or Use of Nuclear Weapons, the right to life continues to apply during hostilities and what constitutes an ‘arbitrary’ deprivation of life will be determined with reference to the IHL rules on targeting. It therefore follows that a violation of the IHL targeting rules will also constitute an interference with human rights. More generally, the use of AI on the battlefield may impact a number of human rights protections, including but not limited to the right to life, the right to liberty, the prohibition on torture or cruel, inhuman or degrading treatment, the right to privacy, the right to respect for property and the prohibition on discrimination. Nevertheless, derogations, the limitations of extraterritorial jurisdiction and the interpretation of IHRL norms in light of prevailing IHL standards all raise questions about the additional practical significance of IHRL in the active hostilities context. However, it is arguably the case that the key relevance of IHRL here rests on the procedural obligations, such as the duty to investigate, that will be triggered as a result of the violation of IHL and IHRL.

The duty to investigate

As a threshold issue, the application of IHRL during an armed conflict occurring outside of a State’s territory depends upon the establishment of extraterritorial jurisdiction. Under the International Covenant on Civil and Political Rights, this may be relatively straightforward where States exert control over individuals’ rights. Whilst a more restrictive approach to extraterritorial jurisdiction has been adopted by the European Court of Human Rights, for the obligation to conduct investigations specifically, recent case law suggests that special features of a case may support the finding of a jurisdictional link, even if the State’s extraterritorial jurisdiction cannot be established for the substantive violation alleged.

Whilst the duty to investigate exists under both IHL and IHRL, the latter nevertheless provides a significantly more detailed set of standards on conducting effective investigations. Though these standards likely require adaptation in the context of armed conflict, this does not obviate the need for States to conduct effective investigations capable of identifying whether or not the conduct causing an alleged violation was justified. This raises the question of whether reliance on AI technologies, notorious for their lack of transparency and predictability, will impede the ability of States to conduct effective investigations. For instance, in order to assess the reasonableness of a commander’s decision to launch a particular attack in the course of an investigation, it is necessary to understand the basis on which that decision was made. The integration of inherently opaque AI-enabled technologies into military arsenals – for instance in target recognition software – complicates this picture, as there is a lack of transparency around which factors influence the algorithm’s output. As such, States must consider whether the technical specificities and design of AI technologies acquired by militaries are sufficient to meet standards set by international law, including the duty to conduct investigations.

The development and acquisition of military AI

It is often repeated in international discussions on military AI that respect for international law needs to be ensured throughout the entire ‘life cycle’ of a system. Whilst there is a tendency in debates to limit consideration of the pre-deployment stage to the duty to conduct weapons reviews under Article 36 of the First Protocol Additional to the Geneva Conventions, IHRL may also hold relevance for understanding the duties on States that develop and acquire these technologies. The emergence of the business and human rights framework may be instructive for understanding the obligations on States to regulate corporate conduct to prevent abuses under the pre-existing duty to protect human rights. Debates on military AI should further consider IHRL to determine what is specifically required of States in regulating the corporations that play a key role in driving forward technological developments in military AI.

Conclusion

Though the applicability of IHRL to military AI is often accepted, meaningful discussion of its implications has been eclipsed by reliance on IHL, which only partially accounts for the applicable international legal framework that regulates AI in the military domain. With respect to the primary international law obligations on States that seek to develop, acquire and use AI-enabled military technologies, human rights also have a role to play. The duties on States acquiring and deploying military AI to investigate and to regulate corporate behaviour are only two examples that highlight the implications of IHRL in this context. This demonstrates the need for more rigorous engagement with human rights alongside IHL in order to determine how these technologies may be developed and used in accordance with international law.

These issues and more will be taken up in further research by Taylor Woodcock, a PhD researcher in public international law at the Asser Institute. Taylor conducts research in the context of the DILEMA Project on Designing International Law and Ethics into Military Artificial Intelligence, which is funded by the Dutch Research Council (NWO) Platform for Responsible Innovation (NWO-MVI). Her work relates to applications of AI in the military domain, reflecting on the implications of these emergent technologies for the fulfilment of obligations flowing from international humanitarian law and international human rights law.

Bio:

 

Taylor Woodcock is a PhD Candidate at the Asser Institute. Her research relates to military applications of artificial intelligence and the obligations that arise with respect to these technologies under international humanitarian law and international human rights law.