New military technologies are transforming the contemporary battlefield and raising complex ethical and legal questions that have not previously been addressed. This Article makes three novel contributions to the legal and ethical debate on Autonomous Weapon Systems (AWS) and military AI. First, it puts forward a normative argument against AWS, even if they outperform humans in adhering to the rules governing the conduct of hostilities. This argument is grounded in the critical importance of the human capacity to act above and beyond the strict letter of the law. The Article contends that this capacity is central to the regulation of warfare, which permits, rather than obligates, the use of force against legitimate targets. Second, it offers a doctrinal analysis of International Humanitarian Law (IHL) and International Human Rights Law (IHRL), the two principal legal regimes that govern armed conflict under international law, and provides a fresh perspective on how they intersect in the context of AWS. Finally, the Article explores the extent to which its normative argument remains persuasive in the context of military AI beyond AWS, a rapidly evolving domain whose technologies are already extensively employed in current conflicts. It examines the similarities and differences between AWS and these broader military AI applications and reflects on what they imply for the desirable regulation of both.