Invictus Maneo — Mortem Nescio

RP Proposal Draft

Killer robots, long believed to be a thing of science fiction, may not remain fiction for much longer. The explosion of artificial intelligence over the past few years has caught the interest of militaries across the world. While remotely controlled weapon systems, such as the CROWS or UAVs, have existed for decades, those systems still depend on human input. Even newer systems keep a human in the loop: programmable drones, smart weapons, and the like rely on people for clearance authority or to interpret what they are seeing. Autonomous Weapon Systems (AWS) seek to take humans out of the equation and give these weapons the "autonomy" to engage targets as they see fit. This is both terrifying and terrifyingly irresponsible. Clearance authority is an immense power and one that is tightly regulated within the military. Handing it to machines that are capable of making mistakes and lack human discernment is a grave error. Furthermore, machines do not have a conscience the way human beings do. Military personnel have both the right and the duty to refuse orders they believe violate International Humanitarian Law. Do we really want to live in a world where tyrants can field unmanned armies capable of committing atrocities without a second thought?

Remote weapon systems were everywhere in Somalia, so the current state of AI in military technology is something I have a good deal of personal experience with. That said, there is no shortage of studies exploring the ethical implications and current capabilities of autonomous weapon systems. Firsthand and secondhand accounts from conflicts where drone warfare has become ubiquitous (such as Ukraine, Syria, or Azerbaijan) would also be worth looking into. All in all, I should have an abundance of information to work with.
