
Doctoral Research Forum Blog Series: Part VIII

Eclipsing Human Rights: Why the International Regulation of Military AI is not Limited to International Humanitarian Law

By Taylor Woodcock


Much has been written on the transformative potential of artificial intelligence (AI) for society. The surge in recent technological advancements that seek to leverage the benefits of AI and machine learning techniques has raised a host of questions about the adverse impacts of AI on human rights. Yet, when it comes to the debate on military applications of AI, the framework of international human rights law (IHRL) tends to receive rather cursory treatment. Greater examination of the relevance of IHRL is therefore necessary in order to more comprehensively address the legality of the development, acquisition and use of AI-enabled military technologies under international law.

AI and human rights

A number of concerns about the potential of AI technologies to interfere with human rights have been raised in recent years. Problems relating to the opacity and unpredictability of AI systems, biases in training data and in the outputs generated, risks of discrimination and breaches of privacy, adverse effects on human dignity, and the difficulty of identifying whom to hold responsible for these harms have all been highlighted regarding the use of AI across a number of different domains. These include the use of AI to detect welfare fraud, as a tool in the criminal justice system or in policing, in the management of borders and migration, and in facial recognition and surveillance technologies, to name but a few. This has led to calls for the use of IHRL as a broad overarching framework for the governance of AI, ensuring respect for rights at all stages in the development and use of these technologies. Reliance on such a framework has the benefit of robust human rights enforcement mechanisms, as well as the availability of well-developed best practices in areas such as human rights impact assessments and due diligence. Yet, whilst these concerns hold equal relevance for the use of AI in the military domain, they remain underexplored in that context.

IHL eclipsing debates on military AI

It is commonly recognised in debates on military AI that the legality of these technologies engages a number of bodies of international law, IHRL amongst them. Nevertheless, with few exceptions, recourse in these debates is typically made to international humanitarian law (IHL) as the primary regime regulating military applications of AI. Of course, IHL remains crucial in this context, and reliance on this body of law makes sense given the intrinsic connection between military technologies and the laws governing the means and methods of warfare. Additionally, political debates on autonomous weapons take place under the auspices of the Convention on Certain Conventional Weapons, which forms part of the corpus of IHL treaties. However, the application of IHL to military AI does not eclipse the relevance of IHRL in this context. Debates about the interplay of IHL and IHRL have persisted in recent decades, yet regardless of the theoretical approach adopted, it is now generally accepted that IHRL continues to apply during armed conflict. Rather than assuming that human rights protections will be displaced by IHL, it is vital to examine more closely, on a norm-by-norm basis, the implications IHRL holds for the development and use of military AI.

Human rights and military AI

There are a number of circumstances in which the use of AI-enabled military technologies engages IHRL, including when States conduct surveillance, engage in counter-terrorism and other security operations, employ anticipatory military strategies or operate in the margins of existing armed conflicts. The complex nature of contemporary conflicts highlights the need for States to account for both IHL and IHRL when military applications of AI are deployed, depending on how the circumstances on the ground inform the applicable paradigm. In one of the most extensive assessments to date of how autonomous weapons may interfere with human rights, Brehm frequently uses sentry systems as an illustrative example of when States may be bound by human rights standards on the use of force during armed conflict. Less often addressed in these debates is the question of what role IHRL plays when applied alongside IHL during active hostilities.

During the conduct of hostilities, IHRL obligations will often be interpreted in light of IHL. As put by the International Court of Justice in its 1996 Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons, the right to life continues to apply during hostilities, and what constitutes an ‘arbitrary’ deprivation of life is determined with reference to the IHL rules on targeting. It therefore follows that a violation of the IHL targeting rules will also constitute an interference with human rights. More generally, the use of AI on the battlefield may impact a number of human rights protections, including but not limited to the right to life, the right to liberty, the prohibition on torture and cruel, inhuman or degrading treatment, the right to privacy, the right to respect for property and the prohibition on discrimination. Nevertheless, derogations, the limits of extraterritorial jurisdiction and the interpretation of IHRL norms in light of prevailing IHL standards all raise questions about the additional practical significance of IHRL in the active hostilities context. Arguably, however, the key relevance of IHRL here lies in the procedural obligations, such as the duty to investigate, that are triggered by violations of IHL and IHRL.

The duty to investigate

As a threshold issue, the application of IHRL during an armed conflict occurring outside of a State’s territory depends upon the establishment of extraterritorial jurisdiction. Under the International Covenant on Civil and Political Rights, this may be relatively straightforward where States exert control over individuals’ rights. Whilst a more restrictive approach to extraterritorial jurisdiction has been adopted by the European Court of Human Rights, for the obligation to conduct investigations specifically, recent case law suggests that special features of a case may support the finding of a jurisdictional link, even if the State’s extraterritorial jurisdiction cannot be established for the substantive violation alleged.

Whilst the duty to investigate exists under both IHL and IHRL, the latter provides a significantly more detailed set of standards on conducting effective investigations. Though these standards likely require adaptation in the context of armed conflict, this does not obviate the need for States to conduct effective investigations capable of determining whether or not the conduct causing an alleged violation was justified. This raises the question of whether reliance on AI technologies, notorious for their lack of transparency and predictability, will impede the ability of States to conduct effective investigations. For instance, in order to assess the reasonableness of a commander’s decision to launch a particular attack in the course of an investigation, it is necessary to understand the basis on which that decision was made. The integration of inherently opaque AI-enabled technologies into military arsenals – for instance in target recognition software – complicates this picture, as it may be unclear which factors influenced the algorithm’s output. As such, States must consider whether the technical specificities and design of AI technologies acquired by militaries are sufficient to meet the standards set by international law, including the duty to conduct investigations.

The development and acquisition of military AI

It is often repeated in international discussions on military AI that respect for international law needs to be ensured throughout the entire ‘life cycle’ of a system. Whilst there is a tendency in debates to limit consideration of the pre-deployment stage to the duty to conduct weapons reviews under Article 36 of the First Protocol Additional to the Geneva Conventions, IHRL may also hold relevance for understanding the duties on States that develop and acquire these technologies. The emergence of the business and human rights framework may be instructive for understanding States’ obligations to regulate corporate conduct in order to prevent abuses, under the pre-existing duty to protect human rights. Debates on military AI should therefore also consider IHRL in determining what is specifically required of States in regulating the corporations that play a key role in driving these technological developments forward.

Conclusion

Though the applicability of IHRL to military AI is often accepted, meaningful discussion of its implications has been eclipsed by reliance on IHL, which only partially accounts for the international legal framework regulating AI in the military domain. With respect to the primary international law obligations on States that seek to develop, acquire and use AI-enabled military technologies, human rights also have a role to play. The duties on States acquiring and deploying military AI to investigate violations and to regulate corporate behaviour are only two examples that highlight the implications of IHRL in this context. This demonstrates the need for more rigorous engagement with human rights alongside IHL in order to determine how these technologies may be developed and used in accordance with international law.

Bio: These issues and more will be taken up in further research by Taylor Woodcock, a PhD Researcher in public international law at the Asser Institute. Taylor conducts research in the context of the DILEMA Project on Designing International Law and Ethics into Military Artificial Intelligence, which is funded by the Dutch Research Council (NWO) Platform for Responsible Innovation (NWO-MVI). Her work relates to applications of AI in the military domain, reflecting on the implications of these emergent technologies for the fulfilment of obligations flowing from international humanitarian law and international human rights law.

