Filip-Andrei LARIU[1]
LL.M. student, Leiden University
Abstract: The objective of this study is to emphasise the distinction between the circumstances in which autonomous weapons systems (AWS) are used. The article is structured around the dichotomy between law enforcement operations and armed conflict, pointing out the differences in the legal norms that apply in each instance in which autonomous weapons are used. While the technology is relatively new, several authors, especially within the framework of international organisations, have written on the subject. However, few, if any, have emphasised the parallels between International Humanitarian Law and International Human Rights Law with regard to the use of autonomous weapons systems. This study aims to stress the importance of correctly classifying the circumstances in which AWS are used, and to show how those circumstances affect the legality of such use.
Keywords: Autonomous Weapons Systems; International Humanitarian Law; International Human Rights Law; Armed Conflict; Law Enforcement Operation.
1. Introduction
The use of artificial intelligence in war and law enforcement is no longer a scenario confined to science fiction. Serious advancements in this field have created a global tendency to remove the human factor from war and rely predominantly on machinery. This tendency has a practical explanation: machines are less costly, in terms of both resources and time, and governments are less likely to experience the political pressure of war-weariness caused by high casualties. Even in domestic law enforcement, the use of artificial intelligence is increasingly favoured, particularly in surveillance. This global trend, although beneficial in practical terms, poses serious legal questions that ought to be clarified before the use of such dangerous weaponry causes grave violations of the law and permanent harm to individuals.
2. Definition
While there is no international consensus on what Autonomous Weapons Systems (‘AWS’) are, for the purpose of this article AWS will be defined as “weapons systems that are characterised by varying degrees of autonomy in the critical functions of acquiring, tracking, selecting, and attacking targets; and, to some extent or even fully, removal of human involvement from the decision-making process to use lethal force”.[2]
3. Law Enforcement vs Armed Conflict
An essential distinction that must be made before delving further into the subject concerns the circumstances in which AWS are to be employed. The importance of this distinction lies in establishing which rules of law are applicable. While International Humanitarian Law (‘IHL’) governs instances of armed conflict, law enforcement operations are governed by International Human Rights Law. The latter is the stricter set of provisions, limiting the use of force against persons as much as possible, whereas humanitarian law takes a more permissive approach to the use of lethal force. Determining the nature of the circumstances in which AWS are used is therefore inextricably bound to the analysis of their legality. In other words, in order to identify the criteria against which the legality of AWS should be assessed, one must first verify whether the particular case is an armed conflict or a law enforcement operation.
The Geneva Conventions offer a definition of armed conflict in Common Article 2: cases of declared war or of any other armed conflict which may arise between two or more states, and partial or total occupation of the territory of a state. These provisions, together with Article 1 of Additional Protocol I to the Geneva Conventions,[3] define the concept of international armed conflict. This type of conflict is what most people associate with conventional war: war between states or wars of liberation.
There is, however, also another, more ‘subtle’ kind of armed conflict: the non-international armed conflict. The concept has been defined as a “situation in which there is protracted resort to armed force between governmental authorities and organised armed groups”.[4] In assessing the level of organisation, one needs to consider whether the combatants are organised as a proper military force. Among the factors considered are the possession of a “command structure and disciplinary rules and mechanisms within the group”, the “ability to plan, coordinate and carry out military operations, including troop movements and logistics”, the ability to “define a unified military strategy and use military tactics”, and the ability “to speak with one voice and negotiate and conclude agreements”.[5] Moreover, in addition to the condition that the opposing belligerent must be an organised armed group, the existence of a non-international armed conflict depends on whether the intensity of the violence has surpassed a certain threshold of “protraction”.[6]
Non-international armed conflicts are the ones that pose the most problems in practice, since they are harder to distinguish from cases of law enforcement. An example where the lines were somewhat blurred is the La Tablada case, in which the Inter-American Commission on Human Rights ruled that an attack on an Argentinian military base that lasted 30 hours was an armed conflict, and not merely an internal disturbance.[7] This conclusion echoes an earlier observation by a Commission of Experts convened by the International Committee of the Red Cross: “The existence of an armed conflict is undeniable, in the sense of Article 3, if hostile action against a lawful government assumes a collective character and a minimum of organisation.”[8]
4. Autonomous Weapons Systems in Armed Conflicts
AWS are not specifically regulated by any IHL treaty. However, a first set of criteria under whose scrutiny they fall is that governing any new weapon: Article 36 of Additional Protocol I to the Geneva Conventions obliges states to determine whether the employment of a new weapon is prohibited by any rule of international law. Below, several principles derived from IHL are discussed in relation to a new weapons system such as AWS.
When employing lethal force, combatants must act in accordance with the principles of proportionality and necessity.[9] The former requires that the harm to civilians not be excessive relative to the expected military gain,[10] while necessity requires that the resort to force be a measure of last resort.[11] With regard to these two principles, an argument in favour of AWS is human error. Although trained, human beings must decide on the extent of the force used under conditions of extreme stress. Feelings such as hunger, fear, or hatred often add to the general stress of finding oneself in the midst of battle. For these reasons, human judgement may be seriously impaired, and combatants may act on instinct rather than rational decision.[12] During the war between Iran and Iraq, in 1987, 37 sailors died as a result of such human error: the USS Stark, part of the U.S. forces acting in support of Iraq, failed to identify a missile threat from an Iraqi fighter jet that had mistaken the Stark for an Iranian ship.[13] Moreover, in an overreaction following that tragic event, the USS Vincennes shot down an Iranian civilian Airbus A300, killing everyone on board.[14] For obvious reasons, none of this applies to AI, which would simply select and engage targets irrespective of any physiological limitations. AWS could therefore prove more effective, less biased, and generally more efficient in producing the intended outcome regardless of external factors. This holds, however, only if one accepts the premise that feelings such as compassion or empathy ought not to play any part in assessing proportionality, which would then rely exclusively on a clear and objective set of criteria to evaluate the level of impending danger and to act accordingly.
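To make concrete what a reduction of proportionality to “a clear and objective set of criteria” might look like, the following is a minimal, hypothetical Python sketch. The numeric scale, the weighing function, and the threshold are all invented for the sake of argument; no real targeting system or legally validated test is implied, and the critique set out in the next paragraph is aimed at precisely this kind of reduction.

```python
# A purely hypothetical sketch of proportionality reduced to objective
# criteria: expected incidental harm is weighed against the anticipated
# military advantage on an invented numeric scale.
from dataclasses import dataclass

@dataclass
class Engagement:
    expected_civilian_harm: float  # estimated incidental harm (abstract units)
    military_advantage: float      # anticipated concrete and direct advantage

def is_proportionate(e: Engagement, max_ratio: float = 1.0) -> bool:
    """Deem an attack proportionate only if the expected civilian harm
    does not exceed max_ratio times the anticipated military advantage."""
    if e.military_advantage <= 0:
        return False  # no military advantage: any incidental harm is excessive
    return e.expected_civilian_harm / e.military_advantage <= max_ratio

# Two hypothetical engagements under the invented scale:
print(is_proportionate(Engagement(2.0, 5.0)))  # True: harm well below advantage
print(is_proportionate(Engagement(8.0, 5.0)))  # False: harm deemed excessive
```

Everything such a rule can weigh must first be expressed as a number; whatever cannot be quantified, including the subjective standards discussed below, simply drops out of the calculation.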
Other authors oppose this theory, arguing that, when assessing proportionality, human beings employ not only a set of rational criteria but also an array of emotional and subjective standards, such as intuition, empathy, and mercy.[15] Humans are “capable of morally praiseworthy and supererogatory behaviour, exemplified by (for example) heroism in battle, something that machines may not be capable of”.[16] A further argument in favour of the necessity of subjective criteria in assessing proportionality is the very existence of the Martens Clause,[17] which provides that persons remain under the protection of the principles of humanity and the dictates of the public conscience. On this reasoning, it may be concluded that AWS cannot be programmed to respect the principle of humanity and the public conscience, these matters being intrinsic exclusively to human beings and their judgement.
Another legitimate concern regarding AWS is the lack of human supervision and control. Autonomous weapons can function entirely on their own, meaning that any human intervention would, in theory, be superfluous, causing unnecessary delay and reducing the weapon’s efficacy. Yet the absence of human intervention and control raises its own issues in the field of accountability.[18] Authors have called for a degree of human involvement not only in the operation phase of the weapons but also during the development and activation stages of AWS.[19] Furthermore, some countries with the capability to develop AWS treat the possibility of human intervention in the acts of such weapons as an imperative.[20]
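By way of illustration, the safeguard these authors call for can be pictured as a simple confirmation gate: the system may propose an engagement, but force is used only upon explicit human authorisation. The following is a minimal sketch under assumed, hypothetical interfaces, not a description of any existing system.

```python
# A minimal sketch of a human-in-the-loop safeguard: the system proposes,
# but only a human operator may authorise. All names are hypothetical.
def human_confirms(target_id: str) -> bool:
    """Return True only if a human operator explicitly authorises the action."""
    answer = input(f"System proposes engaging target {target_id}. Confirm? [y/N] ")
    return answer.strip().lower() == "y"

def engage(target_id: str) -> None:
    # Abstention is the default: absent an explicit "y", nothing happens.
    if human_confirms(target_id):
        print(f"Engagement of {target_id} authorised by a human operator.")
    else:
        print(f"Engagement of {target_id} aborted: no human authorisation.")

engage("T-042")
```

The design point worth noting is that abstention is the default, so a communications failure or an ambiguous answer can only ever prevent, never trigger, the use of force.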
On the other hand, an argument in favour of AWS is that states have both a legal and a moral obligation to employ the least destructive methods when engaged in armed conflict. Because AWS may prove more reliable than human military personnel and may be programmed to be far less prone to using lethal force, declining to deploy this technology would create an additional risk for more persons. On this theory, using humans for such dangerous missions when an alternative, far more effective technology is readily available must be considered a disregard for human life. This approach derives from, or is an extension of, the principle of limiting unnecessary suffering (or superfluous injury).[21] IHL aims to limit destruction and suffering to what is strictly necessary, and the use of AWS instead of human soldiers may well comply with the spirit of this principle.
5. Autonomous Weapons Systems in Law Enforcement Operations
In cases of law enforcement, the use of force is subject to stricter rules than those applicable in the context of an armed conflict. This field is governed by human rights law, whose provisions are found in both international treaties and national legislation, and the discussion is therefore too broad to be covered exhaustively in this work. Nevertheless, some issues and risks need to be highlighted. The first concerns the use of lethal force by AWS; the second is the danger posed by machine learning.
Law enforcement officials are deemed to “have a vital role in the protection of the right to life, liberty and security of the person”.[22] As such, any limitation of this obligation must abide by the principles of legality, strict necessity, proportionality, and precaution.[23] There are very few instances in which the employment of lethal force by AWS could be deemed a matter of strict necessity. For it to be so, the targeted person must pose a direct threat to law enforcement officials or to other persons.[24] Indeed, were AWS to employ lethal force outside these circumstances, the act could be considered an extrajudicial execution, since it would be done neither in self-defence nor under any other form of strict necessity: the decision to kill would rest on a set of predetermined objective criteria, artificial intelligence having by itself no instinct of self-preservation.
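The stricter law-enforcement standard can likewise be made concrete. The hypothetical sketch below encodes the rule just described: lethal force is available only against a direct threat to life, and only once less harmful means have been exhausted; every other situation falls back to graduated measures or to no force at all. The categories and rules are simplified assumptions for illustration, not a restatement of the applicable law.

```python
# A simplified, hypothetical encoding of the human-rights-law
# strict-necessity rule for the use of force.
from enum import Enum, auto

class Threat(Enum):
    NONE = auto()
    NON_LETHAL = auto()             # e.g. resistance or a threat to property
    DIRECT_THREAT_TO_LIFE = auto()

def permitted_force(threat: Threat, lesser_means_exhausted: bool) -> str:
    # Lethal force requires BOTH a direct threat to life AND the
    # unavailability of less harmful alternatives (strict necessity).
    if threat is Threat.DIRECT_THREAT_TO_LIFE and lesser_means_exhausted:
        return "lethal force, as a last resort"
    if threat is not Threat.NONE:
        return "graduated non-lethal measures"
    return "no force"

print(permitted_force(Threat.NON_LETHAL, lesser_means_exhausted=True))             # non-lethal only
print(permitted_force(Threat.DIRECT_THREAT_TO_LIFE, lesser_means_exhausted=False))  # non-lethal only
print(permitted_force(Threat.DIRECT_THREAT_TO_LIFE, lesser_means_exhausted=True))   # last resort
```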
Another risk associated with AI is machine learning: the ability of computer systems to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data. Such systems adjust their internal parameters, and thereby their behaviour, on the basis of the data they are fed rather than on fixed, hand-written rules. For this reason, there is a risk that the data uploaded to AWS during the development phase is biased. If so, the program itself would reach biased and incorrect conclusions,[25] causing it to make wrong decisions with terrible consequences.
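How biased training data propagates into a model’s decisions can be shown with a deliberately tiny example. The sketch below uses synthetic, invented data in which one group is disproportionately labelled a threat; a naive frequency-based “model” faithfully learns, and then reproduces, exactly that skew.

```python
# A minimal, purely illustrative demonstration of training-data bias.
# All data is synthetic and hypothetical; no real system is implied.
from collections import Counter

# Synthetic training set of (group, labelled_as_threat) pairs. The labels
# are skewed: group "B" is disproportionately marked as a threat,
# reflecting bias in whoever or whatever produced the labels.
training_data = ([("A", False)] * 90 + [("A", True)] * 10
                 + [("B", False)] * 40 + [("B", True)] * 60)

def learn_threat_rates(data):
    """Estimate P(threat | group) by simple frequency counting."""
    totals, threats = Counter(), Counter()
    for group, is_threat in data:
        totals[group] += 1
        threats[group] += is_threat
    return {g: threats[g] / totals[g] for g in totals}

# The learned "model" merely reproduces the skew of its training labels:
# group B is flagged six times as often as group A, regardless of any
# individual's actual behaviour.
for group, rate in learn_threat_rates(training_data).items():
    print(f"learned P(threat | group {group}) = {rate:.2f}")
```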
6. Conclusions
Autonomous weapons are becoming a reality in both armed conflicts and law enforcement operations, and the applicable rules of law differ according to the circumstances. Cases of armed conflict are governed by international humanitarian law, while law enforcement must abide by the rules of human rights law. Under humanitarian law, the main issue concerns proportionality and lies in reconciling objective criteria with humane perspectives in decision-making. In law enforcement operations, the problems that arise relate to the legality of the use of lethal force and to the risks associated with machine learning. All things considered, there is no definitive answer to the question of the legality of Autonomous Weapons Systems: their prospective implementation will have to respect a series of rules and take several problematic elements into consideration.
[1] LL.M. student in Public International Law, Leiden University, Leiden, the Netherlands. Filip is a law graduate of Babeș-Bolyai University, Cluj-Napoca, Romania. He was a member of the UBB College for Advanced Performance Studies. He has also taken part in various international competitions and events, most notably two consecutive editions of the Philip C. Jessup International Law Moot Court Competition, qualifying in 2020 for the international rounds. E-mail address: filiplariu@gmail.com. The opinions expressed in this paper are solely the author’s and do not reflect the views of the institution to which he belongs.
[2] ICRC Expert Meeting, “Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons” (2016); Ozlem Ulgen, “Definition and Regulation of LAWS”, UN GGE LAWS (2018).
[3] “Armed conflicts in which peoples are fighting against colonial domination and alien occupation and against racist regimes in the exercise of their right of self-determination.”
[4] International Law Commission, “Draft Articles on the Effects of Armed Conflicts on Treaties”, Yearbook of the International Law Commission, Vol. II, Part Two (2011) (‘ILC Draft Articles’), Art 2(b); See also Prosecutor v Tadić, ICTY IT-94-1-AR72 (Decision on the Defence Motion for Interlocutory Appeal on Jurisdiction) (2 October 1995) [70].
[5] Prosecutor v. Haradinaj, ICTY IT-04-84-T (Judgment) (3 April 2008) [60].
[6] ILC Draft Articles [8].
[7] Inter-American Commission on Human Rights, Report No. 55/97, Case No. 11.137: Argentina, OEA/Ser.L/V/II.98, Doc. 38 (6 December 1997) [55].
[8] ICRC, “Reaffirmation and Development of the Laws and Customs Applicable in Armed Conflict” (May 1969), p. 99.
[9] Ion Galea, “La riposte à une cyberattaque terroriste relève-t-elle du paradigme du conflit armé ?”, Ed. Pedone, 2017, pp. 263-283.
[10] Additional Protocol I to the Geneva Conventions (‘Additional Protocol 1’), Art. 51(5)(b).
[11] Mary Ellen O’Connell, “Unlawful Killing with Combat Drones: A Case Study of Pakistan, 2004-2009”, Notre Dame Legal Studies Paper No. 09-43 (2009), available https://ssrn.com/abstract=1501144 accessed 18 January 2021, p. 19.
[12] Ronald C. Arkin, “Ethical Robots in Warfare”, Georgia Institute of Technology (2009), available http://www.cc.gatech.edu/ai/robot-lab/online-publications/arkin-rev.pdf accessed 18 January 2021.
[13] “Officer Errors Reportedly Left USS Stark Vulnerable”, Chicago Tribune (1 June 1987), available http://articles.chicagotribune.com/1987-06-01/news/8702100123_1_sea-skimming-radar-warning-receiver-exocet accessed 18 January 2021.
[14] George C. Wilson, “Navy Missile Downs Iranian Jetliner”, Washington Post (4 July 1988), available http://www.washingtonpost.com/wp-srv/inatl/longterm/flight801/stories/july88crash.htm accessed 18 January 2021.
[15] Christof Heyns, “Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions”, UN Doc A/HRC/23/47 (2013) (‘Heyns 2013’), par. 5.
[16] Ryan Tonkens, “The Case Against Robotic Warfare: A Response to Arkin”, Journal of Military Ethics, Vol. 11 (2012), pp. 149, 151.
[17] Additional Protocol I, Art 1 (2).
[18] Elena Lazar, “Juridiction et cybercriminalité”, revue TIC, Innovation et droit international, Ed. Pedone, 2017, pp. 157-175.
[19] Neil Davison, “A legal perspective: Autonomous weapon systems under international humanitarian law” (2018) available https://www.icrc.org/en/document/autonomous-weapon-systems-under-international-humanitarian-law accessed 18 January 2021.
[20] U.S. Department of Defense, DoD Directive 3000.09, “Autonomy in Weapon Systems” (2012), sec. 4(a). See also Heyns 2013, par. 45.
[21] International Court of Justice, “Legality of the Threat or Use of Nuclear Weapons” (Advisory Opinion) (8 July 1996) par. 78; Additional Protocol I Art 35.
[22] United Nations, “Basic Principles on the Use of Force and Firearms by Law Enforcement Officials” (adopted September 1990) UN Doc A/CONF.144/28/Rev.1, 112 (‘UN Basic Principles Law Enforcement’), preamble.
[23] UN Basic Principles Law Enforcement, principle 5.
[24] Human Rights Committee, “Suarez de Guerrero v Colombia, Views”, Comm no R.11/45, Supp No. 40 (A/37/40) (9 April 1981), par. 137.
[25] Tim Jones, “Machine learning and bias” (2019), available https://developer.ibm.com/technologies/machine-learning/articles/machine-learning-and-bias/ accessed 18 January 2021.