Autonomy of Military Robots: Assessing the Technical and Legal ('Jus in Bello') Thresholds

The John Marshall Journal of Information Technology & Privacy Law, Volume 32, Issue 2 (2016), pp 57-88

33 Pages. Posted: 5 May 2015. Last revised: 31 May 2023.

Date Written: May 4, 2015

Abstract

While robots are still absent from our homes, they have begun to spread across battlefields.

However, today's military robots are mainly remotely controlled platforms with no genuine autonomy. This paper examines the obstacles to implementing autonomy in such systems by answering a technical question: "What level of autonomy is needed in military robots, and how and when might it be achieved?", followed by a techno-legal one: "How can the rules of humanitarian law be implemented within autonomous fighting robots in order to allow their legal deployment?"

The first chapter scrutinizes the significance of autonomy in robots and the metrics developed by the US Department of Defense to quantify it.

The second chapter focuses on the autonomy of 'state-of-the-art' robots (e.g., Google's self-driving car, DARPA's projects, etc.) for navigation, ISR, or lethal missions. Based on publicly available information, it outlines the architectures, functioning, thresholds, and technical limitations of such systems. The bottleneck to higher autonomy in robots appears to be their poor 'perceptive intelligence'.

The last chapter looks at the requirements of humanitarian law (rules of 'jus in bello'/rules of engagement) for the legal deployment of autonomous lethal robots on the battlefield.

The legal and moral reasoning by which human soldiers comply with humanitarian law is a complex cognitive process that must be emulated by any autonomous robot making lethal decisions. However, the autonomous performance of such 'moral' tasks by artificial agents is far more challenging than the autonomous execution of other tasks, such as navigation, ISR, or kinetic attacks.

Given the limits of current Artificial Intelligence, it is doubtful that robots will soon acquire such 'moral' capabilities. Therefore, for the time being, autonomous weapon systems might be legally deployed, but only in very particular circumstances where humanitarian law requirements are irrelevant.

Keywords: artificial intelligence (AI), limits of AI, robots, autonomous weapons, unmanned aerial vehicles (UAVs), laws of war, proportionality, discrimination, weapon autonomy, jus in bello, International Law of Armed Conflict, UCAV, UMV, UGV, navigation of robots, lethal use of robots, ISR and robots

Suggested Citation

Titiriga, Remus, Autonomy of Military Robots: Assessing the Technical and Legal ('Jus in Bello') Thresholds (May 4, 2015). The John Marshall Journal of Information Technology & Privacy Law, Volume 32, Issue 2 (2016), pp. 57-88. Available at SSRN: https://ssrn.com/abstract=2602160 or http://dx.doi.org/10.2139/ssrn.2602160

Remus Titiriga (Contact Author)

INHA University

100 Inharo, Nam-gu
Incheon, 402-751
Korea, Republic of (South Korea)

