Will killer robots act like men or women? It matters.

South Korea is poised on the brink of ultra-modern warfare, manufacturing and selling automated turret guns that fire .50-calibre rounds and can destroy targets up to 4 km away. Engineer Park Jungsuk, who works in the Robotic Surveillance Division of the gun manufacturer, explains that the turret was originally designed to fire automatically, without human intervention of any kind: ‘[b]ut all of our customers asked for safeguards to be implemented. Technologically it wasn’t a problem for us. But they were concerned the gun might make a mistake.’

What Park describes are known as Lethal Autonomous Weapons Systems (LAWS): robots that can make decisions about whom to kill or what to destroy independently of any human action. Such machines are increasingly being used to replace human soldiers in dangerous work such as bomb disposal, opening doors and laying optical fibers in combat zones. But while autonomous drones have been in use for years, to date no country has acknowledged deploying completely autonomous, lethal robots that make decisions independently of human direction.

The Super aEgis II automated turret makes it clear that the technology to do so exists, but there is considerable concern that the gun ‘might make a mistake’. The difficulty lies in programming the machine to make the correct decisions: the machine will need to be moral.

Feminist scholar Carol Gilligan, author of In a Different Voice: Women’s Conceptions of Self and Morality, posits that men and women take profoundly different approaches to morality. Women, she argues, make moral decisions contextually: what counts as a moral decision depends on the impact of that decision on those around her, so completely opposite courses of action can both be moral, depending on the context in which they occur. Men, by contrast, make moral decisions universally: what counts as a moral decision is whether it would hold in all similar cases, for all actors. Universal morality forms the basis of our justice system.

When it comes to the creation of LAWS, whether the drone thinks contextually (like a woman) or universally (like a man) will have a direct impact on how likely it is to ‘make a mistake’. One of the great difficulties programmers at Google have had with self-driving cars is how they behave at four-way stops. Actual drivers rarely follow the strict rules of the road, and the cars must intuit the local conventions at each intersection. It turns out that behaving aggressively is the correct strategy. It is tempting to think of the rules of the road as universal and the actual behavior as contextual, but in fact the task for LAWS is to discover the correct strategies and deploy them universally, as the sketch below illustrates. If behaving aggressively at four-way stops is the correct strategy for safely clearing the intersection, then it is the correct strategy at every four-way stop. It is a universal strategy that responds to contextual differences.
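To make the distinction concrete, here is a minimal sketch in Python of a universal strategy that responds to contextual differences. Everything in it (the names, the threshold, the observed inputs) is a hypothetical illustration, not real autonomous-vehicle code:

```python
# A minimal sketch of a 'universal strategy that responds to contextual
# differences'. All names, inputs and thresholds are hypothetical; this
# illustrates the idea, not any real autonomous-vehicle system.

from dataclasses import dataclass


@dataclass
class StopObservation:
    """Contextual facts observed at one particular four-way stop."""
    seconds_stalled: float      # how long the queue has been deferring
    other_car_committing: bool  # another driver is already inching forward


def should_advance(obs: StopObservation) -> bool:
    """One rule, applied identically at every intersection.

    The rule itself is universal; only its inputs vary with context.
    If another driver has committed, yield. Otherwise, once the stop
    has been deadlocked long enough, advance assertively to signal
    intent and clear the intersection.
    """
    if obs.other_car_committing:
        return False
    return obs.seconds_stalled > 2.0


# The same strategy yields different actions in different contexts:
print(should_advance(StopObservation(4.5, False)))  # True:  advance
print(should_advance(StopObservation(4.5, True)))   # False: yield
```

The function never changes from one intersection to the next; what varies is the context fed into it, not the rule itself.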

Whether a drone learns to think contextually or universally, or some combination of the two, it is virtually impossible to guarantee that it will never mistakenly kill or destroy a friendly target. Universal morality is likely to reduce these accidents: continuously shifting contexts, in which the same action can be moral in one situation and immoral in another, offer that many more opportunities for mistakes.

Killer robots need to think like men.

[Ed. note: This post originally appeared at the Examiner and is reprinted here with permission]
