NU author(s): Dr Tanya Krupiy
Full text for this publication is not currently held within this repository.
The article addresses how the rules of targeting regulate lethal autonomous robots. Because the rules of targeting are addressed to human decision-makers, it is necessary to clarify what qualities lethal autonomous robots would need to possess in order to approximate human decision-making and to apply these rules to battlefield scenarios. The article additionally analyses state practice in order to propose how the degree of certainty required by the principle of distinction may be translated into a numerical value, and it identifies the reliability rate at which lethal autonomous robots would need to function. The article then analyses whether the employment of three categories of robots complies with the rules of targeting. The first category covers robots that operate on a fixed algorithm. The second category pertains to robots that have artificial intelligence and learn from the experience of being exposed to battlefield scenarios. The third category relates to robots that emulate the working of a human brain.
Author(s): Krupiy T
Publication type: Article
Publication status: Published
Journal: Melbourne Journal of International Law
Print publication date: 30/07/2015
Acceptance date: 01/01/2015
ISSN (print): 1444-8602
ISSN (electronic): 1444-8610
Publisher: University of Melbourne