Lethal Autonomous Weapon Systems and the Potential of Moral Injury

Abstract

Throughout the history of warfare, technology has been developed and deployed to increasingly distance combatants from the enemy. The development of lethal autonomous weapon systems (LAWS) represents a new era, as artificial intelligence (AI)-enabled weapons may make lethal decisions apart from the final judgment and input of human combatants. While there has been significant discussion of the ethical, moral, and legal dimensions of LAWS, to date there has been no comprehensive study of the potential morally injurious effects on those who deploy such weapon systems at the tactical level of war. This dissertation presents conceptual analyses of LAWS and moral injury (MI) and an analytical exploration of ways in which LAWS may contribute to MI in the warfighters who deploy them: (1) violations of the Law of Armed Conflict through algorithmic errors; (2) violations of human dignity when a machine makes the final decision to use lethal force against a human being; (3) automation and confirmation biases that lead operators to trust the technology even when contradictory information is available; (4) the opacity of AI decision-making; and (5) moral displacement, as operators may attempt to shift decisional moral responsibility onto LAWS. The study concludes with recommendations and areas for further research.

Disciplines

Artificial Intelligence and Robotics | Ethics and Political Philosophy | Mental and Social Health | Military and Veterans Studies | Robotics

Department

Humanities (HUM)

First Advisor

Timothy Demy

Second Advisor

Daniel M. Cowdin

Third Advisor

Timothy S. Mallard

Date of Award

Fall 12-2024

Document Type

Dissertation

Degree Name

Ph.D.
