Are “Killer Robots” in Our Future?

“‘Killer robots’ to be debated at UN,” reads the headline. For the anti-UN folks, no, this doesn’t mean that UN officials, becoming overly abrasive during argumentation, could spark a reaction that would move us a step further away from world government. Killer robots don’t currently exist, but whether they should be developed, and the implications of such technology, are the subjects of a debate at a meeting of experts held under the UN Convention on Certain Conventional Weapons (CCW), taking place Tuesday through Thursday this week.

The term “killer robots” may conjure up images of the virtually indestructible killing machine in the film The Terminator, and that is more or less what’s at issue. No, these robots wouldn’t be covered with imitation human flesh (at least not the early versions) or travel through time, but they would be lethal autonomous weapons systems that would choose targets in accordance with their programming. And, as with nuclear bombs, machine guns, and aircraft, they would be, as Brookings Institution warfare futurist Peter W. Singer put it, a battlefield “game changer.”


And these weapons systems aren’t as far off as some may think. As the BBC’s Jonathan Marcus wrote last year:

The era of drone wars is already upon us. The era of robot wars could be fast approaching.

Already there are unmanned aircraft demonstrators like the arrow-head shaped X-47B that can pretty well fly a mission by itself with no involvement of a ground-based “pilot.”

There are missile systems like the Patriot that can identify and engage targets automatically.

And from here it is not such a jump to a fully-fledged armed robot warrior, a development with huge implications for the way we conduct and even conceive of war-fighting.

And, in fact, writes the Wall Street Journal, South Korea “already deploys semi-autonomous machine-gun robots outside its demilitarized zone with North Korea.”

It is this autonomous quality that frightens many. A machine that can make instantaneous decisions, unclouded by emotion and unfettered by human indecision, can greatly enhance battlefield efficiency. But some echo the sentiments of the International Committee of the Red Cross’s Kathleen Lawand, who is quoted in a report as saying, “The central issue is the potential absence of human control over the critical functions of identifying and attacking targets, including human targets. There is a sense of deep discomfort with the idea of allowing machines to make life-and-death decisions on the battlefield with little or no human involvement.”
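To make Lawand’s distinction concrete, consider a minimal sketch in Python of where “human control over the critical functions” sits in an engagement loop. Everything here is hypothetical and illustrative; the names, thresholds, and logic are invented for this example and do not describe any real weapons system:

```python
# Illustrative toy model only -- all names, thresholds, and rules here are
# invented to show where "human control" sits, not drawn from any real system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contact:
    track_id: str
    classified_as: str   # e.g., "vehicle", "person", "unknown"
    confidence: float    # classifier confidence, 0.0 to 1.0

def human_in_the_loop_engage(contact: Contact,
                             operator_approves: Callable[[Contact], bool]) -> bool:
    """The machine only recommends; a person makes the attack decision."""
    recommended = contact.classified_as == "vehicle" and contact.confidence > 0.9
    return recommended and operator_approves(contact)

def fully_autonomous_engage(contact: Contact) -> bool:
    """The machine identifies and attacks on its own -- the case at issue."""
    return contact.classified_as == "vehicle" and contact.confidence > 0.9
```

The two functions differ by a single condition: whether a person stands between the machine’s recommendation and the trigger. That one line is what the CCW debate is about.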

As a result, some want all autonomous killing capacity banned. One such person is University of Sheffield robotics expert Professor Noel Sharkey, chairman of the International Committee for Robot Arms Control, who is debating this week at the CCW. He wrote in The Guardian in 2007:

This is dangerous new territory for warfare, yet there are no new ethical codes or guidelines in place. I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination is terrifying. Policymakers seem to have an understanding of AI that lies in the realms of science fiction and myth. A recent US navy document suggests that the critical issue is for autonomous systems to be able to identify the legality of targets. Then their answer to the ethical problems is simply, “Let men target men” and “Let machines target other machines”. In reality, a robot could not pinpoint a weapon without pinpointing the person using it or even discriminate between weapons and non-weapons. I can imagine a little girl being zapped because she points her ice cream at a robot to share. Or a robot could be tricked into killing innocent civilians.

Professor Sharkey fears that with “prices falling and technology becoming easier, we may soon see a robot arms race that will be difficult to stop.”

Yet others say that not only is this fear-mongering, but that the question with robots is the same as with nuclear weapons: How do you put the genie back in the bottle? Among those taking this position is Georgia Institute of Technology’s Professor Ronald Arkin, another roboticist debating at the CCW. He says that while he supports a moratorium on military use of robots until the necessary safeguards are built into the systems, he doesn’t advocate a ban because a ban is unrealistic. Relating Arkin’s position in The Chronicle of Higher Education, Don Troop wrote in 2012:

“I am not a proponent of lethal autonomous systems,” he says in the weary tone of a man who has heard the accusation before. “I am a proponent of when they arrive into the battle space, which I feel they inevitably will, that they arrive in a controlled and guided manner. Someone has to take responsibility for making sure that these systems … work properly. I am not like my critics, who throw up their arms and cry, ‘Frankenstein! Frankenstein!'”

In fact, Arkin believes that robots could conceivably make better ethical battlefield decisions than soldiers do, on average. His “tipping point,” he says, was an Apache helicopter video that he believed showed U.S. servicemen in Iraq violating the rules of war by killing a wounded man from afar. He subsequently sought and “won a three-year grant from the U.S. Army Research Office for a project with a stated goal of producing ‘an artificial conscience’ to guide robots in the battlefield independent of human control. The project resulted in a decision-making architecture that Mr. Arkin says could potentially lead to ethically superior robotic warriors within as few as 10 to 20 years, assuming the program is given full financial support,” wrote Troop.
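In his published research, Arkin has described such a veto layer as an “ethical governor”: a component that can suppress any lethal action the rest of the system proposes. A loose sketch of that idea, with invented rules and field names that drastically simplify real law-of-war concepts (nothing here reflects Arkin’s actual design), might look like this:

```python
# Hypothetical sketch of an "ethical governor" veto layer. The rules and
# field names are invented for illustration and radically simplify the
# legal concepts of discrimination, hors de combat, and proportionality.

def governor_permits(action: dict) -> bool:
    """Return True only if every encoded constraint allows the action."""
    constraints = [
        lambda a: a["target_type"] == "combatant",    # discrimination
        lambda a: not a["target_is_wounded"],         # no attacks on the hors de combat
        lambda a: a["expected_civilian_harm"] == 0,   # toy stand-in for proportionality
    ]
    return all(check(action) for check in constraints)

# The scenario from the Apache video -- a wounded man -- would be vetoed:
proposed = {"target_type": "combatant",
            "target_is_wounded": True,
            "expected_civilian_harm": 0}
print(governor_permits(proposed))  # False
```

In this toy version, the engagement Arkin found objectionable in the Apache video fails the hors de combat check automatically, which is the kind of consistency he argues a machine can offer where a stressed or vengeful soldier might not.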

Moreover, Arkin points out that robots could save the lives of both soldiers and civilians. As Troop also wrote, “Rather than risking one’s own life to protect noncombatants who may or may not be behind a door, Mr. Arkin says, a soldier ‘might have a propensity to roll a grenade through there first … and there may be women and children in that room.’ A robot could enter the room and gauge the level of threat from up close, eliminating the risk to a soldier.”

Whatever the risks and rewards, what seems certain is that with the apple long ago having been bitten, a return to Eden will elude us. Perhaps all we can do now is hope and pray that battlefield robots will be, at least most of the time, more moderator than Terminator.