AI Safety

The End of Human Judgment in the Kill Chain? Relocating Initiative and Interpretation with Agentic AI

New research argues that LLM-based agents in warfare make meaningful human judgment impossible, challenging global governance frameworks.

Deep Dive

A new academic paper by philosopher Jovana Davidovic, published on arXiv, presents a stark ethical warning about the deployment of Large Language Model (LLM)-based agents in military operations. The research focuses on 'agentic AI'—systems with capacities for initiative, interpretation, goal-directedness, and dynamic memory—that are increasingly used for core battlefield functions like intelligence analysis, data fusion, and battle management. Davidovic argues that the very features that make these AI agents operationally attractive are the same ones that render 'context-appropriate human judgment and control substantively ineffectual' within the parts of the 'kill chain' (the process of identifying and engaging a target) where they operate.

By autonomously relocating initiative and interpretation, these systems displace human decision-making in a way that Davidovic contends is incompatible with the requirement for meaningful human judgment central to existing international governance frameworks, such as those advanced by the UN's Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE-CCW) and the REAIM summit. The paper draws on specific use cases to illustrate this fundamental conflict, concluding that a subset of these agentic AI applications, particularly those deployed for data fusion and battle management in lethal contexts, 'cannot be used justifiably on the battlefield under current and foreseeable conditions.' It closes by proposing two pathways for the international governance community to address this pressing challenge.

Key Points
  • LLM-based agents with initiative and memory are being integrated into battlefield intelligence and management systems.
  • The paper argues these AI features inherently displace human decision-making, making meaningful control impossible within governance rules.
  • The paper concludes that such lethal applications are unjustifiable under current frameworks like the GGE-CCW and calls for international policy action.

Why It Matters

This research challenges the ethical foundation of deploying advanced AI in warfare, pushing for urgent international policy reform.