Algorithms, Automation and Accountability: Imagining Responsibility for the Crimes of Machines

The following is a guest post by Masoud Zamani, a lecturer in international law and international relations at the University of British Columbia. His research focuses on the intersection of emerging technologies and international legal accountability.

(Image by Tavis Coburn / Scientific American)

In recent years, the growing discourse on the regulation of lethal autonomous weapon systems (LAWS) has brought renewed attention to the question of individual criminal responsibility for acts committed by such systems. While theoretical debates have long grappled with whether individual criminal liability can arise in connection with the conduct of LAWS, more focused discussions on the specific modes of responsibility are now beginning to take shape—and at an accelerating pace.

International criminal law (ICL), shaped historically by a fundamentally anthropocentric orientation, has largely operated on the assumption that crimes are committed by human agents capable of intent, knowledge, and control. As such, traditional modes of individual responsibility, such as ordering, planning, instigating, aiding and abetting, and command responsibility, must now evolve to address the complexities introduced by autonomous warfare.

This post explores the current legal framework surrounding command responsibility under international criminal law and examines emerging conceptual pathways for adapting or extending existing modes of liability to account for the unique challenges posed by LAWS.

Attribution Without a Human Agent?

The legal attribution of acts committed by lethal autonomous weapon systems poses a pressing challenge for ICL, particularly when attempting to establish individual criminal responsibility. As Neil Davison has observed, the central difficulty arises from the weapon’s capacity to select and attack targets independently, thereby inserting a non-human agent between human intent and the use of force. This disrupts the conventional model of attributing responsibility in ICL, which presumes a linear chain of causality between human conduct and harm.

A key concern, increasingly recognized in the literature, is that attribution and causation are not necessarily synonymous. While causation refers to the factual chain of events, attribution is a normative process—it determines who ought to bear legal responsibility. In cases involving LAWS, even if a commander’s or programmer’s actions contribute causally to an attack, legal responsibility still requires a demonstrable nexus in the form of intent, knowledge, or recklessness. That burden becomes especially difficult to meet when machine behavior is unpredictable due to algorithmic learning, misinterpretation of sensor input, or simple technical malfunction. As Rebecca Crootof remarks, the greater the complexity of an autonomous weapon system, the higher the likelihood of unpredictable behavior and unforeseen consequences. This exacerbates the accountability gap: serious violations of international humanitarian law (IHL) occur, yet no human actor can be said to meet the legal threshold for criminal liability.

Command Responsibility in an Era of Machine Autonomy

The modern contours of command responsibility find their roots in the 1945 trial of General Tomoyuki Yamashita, where liability was imposed based on his failure to prevent atrocities committed by his troops. The tribunal’s reasoning marked a turning point: a commander could be held accountable not only for what he ordered, but also for what he failed to prevent. This logic of responsibility through omission would later echo through the evolution of ICL. It informed the drafting of Article 86 of Additional Protocol I to the Geneva Conventions, and ultimately crystallized in Article 28 of the Rome Statute. There, the doctrine took a more structured form, grounded in three cumulative elements: (1) the existence of a superior-subordinate relationship; (2) actual or constructive knowledge of the crimes; and (3) a failure to take all necessary and reasonable measures to prevent or repress those crimes.

In the context of LAWS, however, these conditions become increasingly strained. As Alessandra Spadaro notes, it is particularly difficult to establish a superior–subordinate relationship when dealing with autonomous weapon systems, which impose significant limitations on attributing responsibility to commanders under ICL. As I will explain further below, commanders often lack the technical expertise to fully anticipate how autonomous systems will behave in combat. This limitation makes it difficult to apply the “should have known” standard—an essential part of command responsibility—because it presumes a level of understanding and foreseeability that may no longer be realistic in machine-led warfare. When commanders neither control nor fully understand the autonomous systems they deploy, the very concept of “constructive knowledge” risks losing its doctrinal coherence.

LAWS operate along a spectrum: “human-in-the-loop” systems require human authorization for lethal force; “human-on-the-loop” systems permit autonomous execution under human supervision; and “human-out-of-the-loop” systems function with no human input once activated. The latter category most severely compromises accountability, as it becomes nearly impossible to demonstrate that a commander exercised “effective control” as required by the Rome Statute.

Technical complications exacerbate the problem. Multiple actors—engineers, military planners, operators, AI trainers—may all play roles in the development and deployment of the system, diluting responsibility. The so-called “black box” problem further complicates the issue. In many advanced AI systems, not even developers can fully explain how a given output was generated. 

A deeper legal obstacle lies in Article 30 of the Rome Statute, which requires both intent and knowledge to establish criminal responsibility. This strict interpretation—excluding recklessness or dolus eventualis (referring to situations where a person foresees the possibility of a harmful outcome and accepts that risk)—was reaffirmed in the Katanga case, where the ICC declined to expand criminal liability based on foreseeability or risk acceptance. The result is a narrow mental threshold that is increasingly misaligned with the complexity of autonomous military operations. Marta Bo has highlighted how this interpretive rigidity entrenches a “responsibility gap” in LAWS deployments. Even when unlawful outcomes are foreseeable, the absence of an identifiable actor with specific intent may preclude prosecution.

Legal Innovation: Indirect Perpetration and Algorithmic Intent

To address this growing accountability vacuum, legal scholars have proposed a series of innovative frameworks. Paola Gaeta advocates adapting the doctrine of indirect perpetration, positing that LAWS could function as “unwitting intermediaries” through which human actors commit crimes. In this model, liability attaches if the individual in question exercised sufficient control over the circumstances and foresaw the likely outcomes.

Luciano Floridi similarly proposes a model of strict or risk-based collective liability for those involved in the design, deployment, or supervision of AI systems, provided there was awareness of foreseeable harm. Gabriel Hallevy goes further, suggesting that certain systems may exhibit “algorithmic mens rea”—patterns of behavior embedded in code that mirror human intent, particularly when protected persons are systematically targeted. In this model, AI-enabled systems are treated as capable of bearing criminal liability akin to corporations. 

Alice Giannini introduces a further layer: systemic indifference to civilian harm embedded in design or oversight structures may serve as a surrogate for intent. Marta Bo emphasizes supervisory negligence and reckless reliance on opaque systems as grounds for liability. Jonathan Kwik adds a pragmatic dimension to this conversation, recommending rigorous technical training for commanders as a way to mitigate “generic risk-taking”—that is, decision-making based on vague awareness of potential harm rather than a concrete understanding of system limitations.

Taken together, these perspectives converge on the need to expand legal responsibility beyond traditional actors and to include those whose roles are often upstream or indirect.

Meaningful Human Control and the Traceability Imperative

Beyond doctrinal theory, the notion of “meaningful human control” (MHC) has gained traction as a possible normative safeguard. For human control to qualify as meaningful, it must tangibly influence the system’s behavior—for instance, through the ability to redefine mission parameters or abort an operation (p.6). This definition is echoed in various United Nations and International Committee of the Red Cross documents and serves as a minimum baseline for lawful deployment.

As I have argued elsewhere, however, MHC is more persuasive as a structural requirement than as a substitute for legal doctrine. That is, it should ensure the continuous presence of human actors in the decision-making loop, rather than serve as a vague reference to “oversight.” In operational terms, this means embedding legal accountability into the very architecture of deployment, oversight, and post-hoc review.

One possible mechanism to support this vision is the use of Advance Control Directives (ACDs). These aim to document a commander’s legal and operational intent prior to deployment, thereby enhancing traceability and supporting retrospective evaluations of precaution and legality. As Kate Devitt argues, ACDs provide a transparent record of instruction that can inform legal assessments. Yet ACDs are not a panacea. They assume that machine-learning systems will reliably interpret directives in unpredictable environments—an assumption that, so far, lacks empirical support. As Devitt notes, even well-intentioned ACDs can be misinterpreted or overridden by machine logic, severing the link between human will and autonomous action. Therefore, while ACDs enhance transparency, they cannot be a substitute for real-time human supervision or broader legal reform.

Conclusion: Rebuilding Legal Coherence in the Age of the Algorithm

At the heart of ICL lies a foundational belief: that individuals bear responsibility for serious violations of the laws of war. Autonomous weapons systems do not negate this principle, but they do strain its doctrinal scaffolding in unprecedented ways. It is difficult to ignore the emergence of a new and deeply troubling dilemma: the risk of non-accountability for a new class of violence. This challenge demands both jurisprudential innovation and operational reform. While concepts like meaningful human control and mechanisms such as Advance Control Directives (ACDs) offer partial safeguards, they must be situated within a broader legal architecture—one that acknowledges the complexity of modern conflict yet refuses to relinquish the human anchor of legal responsibility. To preserve the integrity of law in an era of algorithmic warfare, we must ensure that accountability does not become the first casualty of autonomy. 
