“Heavily armed robots turning a city into a war zone, stalking the streets and killing soldier and civilian alike.”
This isn’t the plot of an Arnold Schwarzenegger film. It’s a vision of what the future might look like, according to a report in the scientific journal Nature on the future of Lethal Autonomous Weapons Systems (LAWS) – armed automatons with the ability to select and attack targets without human intervention.
While killer robots sound like science fiction, the contributors to Nature broke the technological requirements down into two main components: the navigation and hazard perception already demonstrated in prototype self-driving cars, and target selection. Both technologies are nearing completion.
The terrifying aspects of this development go beyond what the journal terms “the ethical debates” posed by technologies like LAWS. There are also political implications.
First, LAWS make going to war less of a commitment for a government. A steady stream of dead soldiers, their lives wasted in a foreign intervention, can start to turn a population against war. A steady stream of broken robots is unlikely to have that effect.
Second, the technology would make the state more stable in situations of upheaval. Mass armies often mutiny in times of rebellion. These robots, however, would know only how to kill. And kill they would, for whatever regime possessed them.
The weakness of the Nature article is that it promotes international law as the solution to prevent these killing machines from being abused by nation states.
The problem, of course, is that international law has proven impotent in protecting vulnerable populations from imperialist slaughter.
Any treatment of dangerous new technologies like LAWS under international law would likely be limited to restricting smaller powers from developing the technology openly, while leaving the floodgates open for imperialist powers to build and use these killing machines.