Is Criminal Law Ready for Violence Committed by Autonomous Intelligence?

Written by: Assoc. Prof. Dr. Igor Vuletić

When we speak about violence, regardless of its type and the breadth of our definition, we always mean violence committed by humans. Humans may employ certain means in the act (such as animals, weapons, or tools), but the essence remains the same: the source of the violence is the willful act of a human being.

Today, however, we live in a time of unprecedented technological advances. The technology industry develops literally on a daily basis, and its innovations enter ever more areas of everyday life, so modern society has no choice but to adapt constantly to new living conditions. In this sense, the progressive development of autonomous technologies over the last decade is of particular significance.

At the very outset, a distinction should be made between the concept of autonomous intelligence and the concept of robots. The term "robot" was first used by the Bohemian writer Karel Čapek back in 1920, in his science-fiction play Rossumovi Univerzální Roboti, and it went on to become the universal term for all types of devices with a certain degree of independence in their operation. It presumes that the "robot" has a certain embodiment, i.e. a physical form. The concept of autonomous intelligence, on the other hand, is broader: it does not require a specific form, but covers any artificial agent capable of sensing activity in its environment, processing the received information, and adjusting its own activity accordingly (Calo, 2016). Such a definition also extends to certain types of software. This broader term, autonomous intelligence, will be used throughout this text.
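To make this definition concrete, the sense-process-act loop it describes can be sketched in a few lines of Python. This is a minimal illustration only; the class, the method names, and the distance threshold are hypothetical and stand for no particular real system:

```python
# Minimal sketch of the sense-process-act loop behind the broader definition
# of autonomous intelligence: the agent observes its environment, processes
# the observation, and adjusts its own activity, with no human in the loop.
# All names and values here are hypothetical, chosen only for illustration.

class AutonomousAgent:
    def sense(self, environment: dict) -> dict:
        # Collect raw information from the surroundings.
        return {"obstacle_distance": environment.get("obstacle_distance", float("inf"))}

    def process(self, observation: dict) -> str:
        # Derive a decision from the observation, without human intervention.
        return "stop" if observation["obstacle_distance"] < 5.0 else "proceed"

    def act(self, decision: str) -> None:
        # Adjust the agent's own activity according to the decision.
        print(f"agent action: {decision}")


agent = AutonomousAgent()
for env in ({"obstacle_distance": 12.0}, {"obstacle_distance": 2.5}):
    agent.act(agent.process(agent.sense(env)))
```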

In recent times, autonomous intelligence has gradually become dominant in various industries and is becoming part of people's daily lives. This has many advantages (it improves the quality of life and enables faster, more efficient provision of services, especially in the medical field; Camarillo et al., 2004), but it also has its disadvantages. One potential disadvantage concerns criminal law and the question of who is liable (if anybody is liable at all) when an autonomous system performs an act of violence against a person or property. The issue is further complicated when such violence involves a larger number of victims, i.e. when its consequences are of a larger scope. This is why there have already been over 60 different initiatives to place AI within an ethical framework (Mittelstadt, 2019). Before delving into this very significant and still unanswered question, we have to point out the areas in which there is a risk of such forms of violence. There are several such areas.

The first area is the military industry, which has recently been developing new types of weapons capable of so-called autonomous warfare. This means that such weapons can select and destroy targets on their own, without human intervention. They significantly improve the effectiveness of warfare and decrease the number of casualties (on the side using them), which is why many of the world's leading militaries have made their production a top priority. But what if an autonomous weapon commits an error in selecting the target and causes innocent casualties? For example, what if a drone with autonomous warfare capabilities misidentifies the enemy target and bombards a civilian one instead, resulting in numerous civilian deaths or widespread destruction of property? Similar cases from the not-so-distant military past show that such scenarios are not impossible. Perhaps the most gruesome occurred in 1988, when the American radar system "Aegis", whose purpose was to protect warships from aerial attack, confused the Iranian civilian airplane Iran Air 655 with a military aircraft and launched an antiaircraft missile, causing the death of all 290 passengers and crew members. Could this be considered a war crime and, more importantly, who is liable for such a crime if the system is autonomous? In previous attempts to establish an ethical framework for the responsible use of AI, the lack of a clear position regarding autonomous weapons received the most criticism (Russell, 2015).

Furthermore, another potential source of danger lies in the operation of so-called social bots, which can be defined as "a software application used to automatically generate messages, advocate ideas, act as a follower of users, and as a fake account to gain followers itself". The literature warns that social bots can produce violence directly or indirectly. They produce violence directly by disseminating hate speech, sending threatening messages, inciting violence against certain persons, or publishing false and defamatory content. They can also incite violence indirectly by boosting the visibility of negative tweets or posts through the intensive provision of likes for negative content (King, Aggarwal, Taddeo and Floridi, 2019). A toy example of this amplification mechanism is sketched below.
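The indirect mechanism can be simulated in a few lines: a feed ranked purely by engagement lets a handful of automated "likes" push one post past organically more popular ones. The ranking rule (score equals the number of likes) and all the figures are deliberate simplifications for illustration, not a model of any real platform:

```python
# Toy model of engagement-based ranking: posts are shown most-liked first.
posts = {"post_a": 40, "post_b": 35, "post_c": 30}  # organic like counts

def ranked_feed(likes: dict) -> list:
    # Order posts by like count, descending.
    return sorted(likes, key=likes.get, reverse=True)

print(ranked_feed(posts))  # organic ranking: ['post_a', 'post_b', 'post_c']

# Twenty automated accounts each "like" post_c: its visibility now exceeds
# that of posts with genuinely higher organic engagement.
posts["post_c"] += 20
print(ranked_feed(posts))  # amplified ranking: ['post_c', 'post_a', 'post_b']
```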

The danger arising from autonomous intelligence is very real in traffic as well. The automotive industry has been intensively developing self-driving cars, in which the human behind the wheel is no longer a driver but a passenger, and which are ultimately capable of driving with no human present in the vehicle at all (Mrčela and Vuletić, 2018). Such cars are already in use in certain states of the USA.

Problems arise when such a car causes a traffic accident, as happened several years ago in Arizona. There, a self-driving (autonomous) car operating in so-called self-driving mode killed a woman who was crossing the street at an unmarked location. The car's sensors did not recognize the pedestrian as a human, and the car continued to move at full speed, which led to the violent act and the loss of a life (S. Levin, J.-C. Wong, Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian).
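In technical terms, this case also shows where the autonomous decision enters the causal chain: a perception module assigns each detected object a class label and a confidence score, and the braking decision depends on that classification. The sketch below is purely illustrative; the threshold, the labels, and the function name are assumptions made for the sake of the example and do not reproduce any real vehicle's software:

```python
# Illustrative braking logic: brake only for objects confidently classified
# as a known hazard. The threshold and class labels are hypothetical.
BRAKE_CLASSES = {"pedestrian", "cyclist", "vehicle"}
CONFIDENCE_THRESHOLD = 0.8  # assumed value, for illustration only

def should_brake(detection: dict) -> bool:
    # A detection triggers braking only if its label belongs to a known
    # hazard class and the classifier is sufficiently confident.
    return (detection["label"] in BRAKE_CLASSES
            and detection["confidence"] >= CONFIDENCE_THRESHOLD)

# A correctly classified pedestrian triggers braking...
print(should_brake({"label": "pedestrian", "confidence": 0.92}))  # True
# ...but a pedestrian misclassified as an unknown object never does, even
# though the sensors physically registered an obstacle.
print(should_brake({"label": "unknown", "confidence": 0.95}))     # False
```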

Along with the examples described above, violent acts of autonomous intelligence are possible in other areas of life. The literature thus describes the role played by AI in potentially dangerous and violent domains, from drug trafficking and smuggling to the use of violent interrogation methods in terrorism cases (King, Aggarwal, Taddeo and Floridi, 2019). The common denominator in all of these cases is the question from the title of this paper: who bears criminal liability in such cases, i.e. is criminal law even ready for the challenges posed by the development of autonomous intelligence?

In attempting to answer this question, legal philosophers have considered whether autonomous systems themselves could bear criminal liability, given that criminal law has already shifted conceptual boundaries in a similar way when it accepted the criminal liability of legal entities. Positions range from the extremely permissive, which recognizes the liability of autonomous intelligence, to the extremely restrictive, which emphatically denies such a possibility (Brożek and Jakubiec, 2017). We are closer to the restrictive positions, primarily because we see neither a purpose in punishing autonomous systems nor a possibility of punishing them adequately. In our view, it is therefore more important to direct the discussion towards the criminal liability of the persons (legal and natural) who are responsible for the functioning of a specific autonomous system.

The issue of establishing the criminal liability of the persons "behind the machine" is complex and comes down to two main questions. The first relates to causality. It can be assumed, for example, that a military commander who ordered the use of an autonomous weapon, as well as its programmer, producer and distributor, will invoke the doctrine of a break in the chain of causation and claim that the machine was properly programmed and that the error it committed was autonomous, thus breaking the causal chain. In other words, their contribution in the sense of causality is reduced to compliance with the instructions for use and regular maintenance. The fact that the system makes autonomous decisions, based on direct observation of its environment and analysis of the collected information, indeed breaks the criminal causality in the normative sense (Novoselec, 2016).

The second question concerns the type of liability, and here a distinction should be made between typically intentional criminal offenses (such as war crimes) and typically negligent ones (such as traffic accidents). The problem with intent is the lack of the volitional nexus, which is manifested through consent to an act: how can the person behind the machine be condemned for consenting to a result if the goal that person initially pursued was the direct opposite of the effect that occurred? Negligence poses an even bigger problem, because it presupposes the "foreseeability" of the consequences, which is uncertain in cases involving autonomous intelligence. Beyond these issues, there is the question of the liability of legal entities, a concept that has not yet been universally accepted. Considering that such liability will be very significant in this area, the existing inconsistencies could clearly be a large obstacle.

In light of the above, it can be concluded that criminal law, based as it is on the traditional concept of criminal liability, does not have answers to the contested issues raised by the challenges of AI. The number of cases of violence committed by AI, such as the described accident in Arizona or the downing of the Iranian airplane, can be expected to grow. The existing concepts of causality, liability, and the responsibility of legal entities will therefore have to be reconsidered, and these topics will have to be discussed at the regional and global level, because criminal law has to keep pace with the needs and standards of modern society.

References:

B. Brożek, M. Jakubiec, On the Legal Responsibility of Autonomous Machines, Artificial Intelligence and Law 25 (2017).

R. Calo, Robots in American Law, University of Washington School of Law Research Paper, No. 2016-04.

D. B. Camarillo, T. M. Krummel, J. K. Salisbury, Robotic Technology in Surgery: Past, Present, and Future, The American Journal of Surgery 188 (2004).

T. C. King, N. Aggarwal, M. Taddeo, L. Floridi, Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions, Science and Engineering Ethics 1 (2019).

B. Mittelstadt, AI Ethics – Too Principled to Fail?, University of Oxford – Oxford Internet Institute, 2019.

M. Mrčela, I. Vuletić, Criminal Law Facing Challenges of Robotics: Who Is Liable for Traffic Accident Caused by Autonomous Vehicle?, Collected Papers of Zagreb Law Faculty 3-4 (2018).

P. Novoselec, Opći dio kaznenog prava, Pravni fakultet Osijek, 2016.

S. Russell, Take a Stand on AI Weapons, Nature 521 (2015).