1st Edition

Responsible Use of AI in Military Systems

Edited by Jan Maarten Schraagen. Copyright 2024.
    386 pages, 11 B/W illustrations
    Published by Chapman & Hall

    Artificial Intelligence (AI) is widely used in society today. The (mis)use of biased data sets in machine learning applications is well known, resulting in the discrimination and exclusion of citizens. Another example is the use of non-transparent algorithms that cannot explain their outputs to users, with the result that the AI is not trusted and therefore not used even when it could be beneficial.

    Responsible Use of AI in Military Systems lays out what is required to develop and use AI in military systems in a responsible manner. Current developments in the emerging field of Responsible AI as applied to military systems in general (not merely weapons systems) are discussed. The book takes a broad and transdisciplinary scope by including contributions from the fields of philosophy, law, human factors, AI, systems engineering, and policy development.

    Divided into five sections, Section I covers various practical models and approaches to implementing military AI responsibly; Section II focuses on liability and accountability of individuals and states; Section III deals with human control in human‑AI military teams; Section IV addresses policy aspects such as multilateral security negotiations; and Section V focuses on ‘autonomy’ and ‘meaningful human control’ in weapons systems.

    Key Features:

    • Takes a broad transdisciplinary approach to responsible AI
    • Examines military systems in the broad sense of the term
    • Focuses on the practical development and use of responsible AI
    • Presents a coherent set of chapters, as all authors spent two days discussing each other’s work

    This book provides the reader with a broad overview of all relevant aspects of the responsible development, deployment, and use of AI in military systems. It stresses both the advantages of AI and the potential downsides of including AI in military systems.

    Preface

    Acknowledgements

    Editor

    Contributors

    1 Introduction to Responsible Use of AI in Military Systems

    Jan Maarten Schraagen

    SECTION I Implementing Military AI Responsibly: Models and Approaches

    2 A Socio‑Technical Feedback Loop for Responsible Military AI Life‑Cycles from Governance to Operation

    Marlijn Heijnen, Tjeerd Schoonderwoerd, Mark Neerincx, Jasper van der Waa, Leon Kester, Jurriaan van Diggelen, and Pieter Elands

    3 How Can Responsible AI Be Implemented?

    Wolfgang Koch and Florian Keisinger

    4 A Qualitative Risk Evaluation Model for AI‑Enabled Military Systems

    Ravi Panwar

    5 Applying Responsible AI Principles into Military AI Products and Services: A Practical Approach

    Michael Street and Sandro Bjelogrlic

    6 Unreliable AIs for the Military

    Guillaume Gadek

    SECTION II Liability and Accountability of Individuals and States

    7 Methods to Mitigate Risks Associated with the Use of AI in the Military Domain

    Shannon Cooper, Damian Copeland, and Lauren Sanders

    8 ‘Killer Pays’: State Liability for the Use of Autonomous Weapons Systems in the Battlespace

    Diego Mauri

    9 Military AI and Accountability of Individuals and States for War Crimes in the Ukraine

    Dan Saxon

    10 Scapegoats!: Assessing the Liability of Programmers and Designers for Autonomous Weapons Systems

    Afonso Seixas Nunes, SJ

    SECTION III Human Control in Human–AI Military Teams

    11 Rethinking ‘Meaningful Human Control’

    Linda Eggert

    12 AlphaGo’s Move 37 and Its Implications for AI‑Supported Military Decision‑Making

    Thomas W. Simpson

    13 Bad, Mad, and Cooked: Moral Responsibility for Civilian Harms in Human–AI Military Teams

    S. Kate Devitt

    14 Neglect Tolerance as a Measure for Responsible Human Delegation

    Christopher A. Miller and Richard G. Freedman

    SECTION IV Policy Aspects

    15 Strategic Interactions: The Economic Complements of AI and the Political Context of War

    Jon R. Lindsay

    16 Promoting Responsible State Behavior on the Use of AI in the Military Domain: Lessons Learned from Multilateral Security Negotiations on Digital Technologies

    Kerstin Vignard

    SECTION V Bounded Autonomy

    17 Bounded Autonomy

    Jan Maarten Schraagen

    Index

    Biography

    Jan Maarten Schraagen is a Principal Scientist at TNO, The Netherlands. His research interests include human-autonomy teaming and responsible AI. He is the lead editor of Cognitive Task Analysis (2000) and Naturalistic Decision Making and Macrocognition (2008), and a co-editor of The Oxford Handbook of Expertise (2020). He is Editor-in-Chief of the Journal of Cognitive Engineering and Decision Making. Dr. Schraagen holds a PhD in Cognitive Psychology from the University of Amsterdam, The Netherlands.