There has been much talk lately about robots taking jobs, but the even scarier prospect is robots taking lives. This, too, is inevitable, says Microsoft president Brad Smith. In fact, he warned in a recent Telegraph interview that governments need to consider the ethical questions raised by the development of battlefield robots.
The term “killer robots” may conjure up images of the virtually indestructible automaton assassin in the film Terminator, and that is roughly what’s at issue. No, these robots wouldn’t be covered with imitation human flesh or travel through time, but they would be lethal autonomous weapons systems that would choose targets in accordance with their programming. And, as with nuclear bombs, machine guns, and aircraft, they would be, as Brookings Institution warfare futurist Peter W. Singer put it, a battlefield “game changer.”
As for Smith, he “said the rapidly advancing technology, in which flying, swimming or walking drones can be equipped with lethal weapons systems — missiles, bombs or guns — … ‘ultimately will spread… to many countries’,” reports the Telegraph. Of course, this is just stating the obvious: whether it was machine guns in 1900 or nuclear devices today, nations have always sought the latest and most effective weaponry.
Thus, the “US, China, Israel, South Korea, Russia and the UK are all developing weapon systems with a significant degree of autonomy in the critical functions of selecting and attacking targets,” the Telegraph continues.
“The technology is a growing focus for many militaries because replacing troops with machines can make the decision to go to war easier.”
This makes the technology an especially frightening prospect for nations not possessing it: An attacker equipped with it could launch a war while perhaps incurring few or no human casualties, whereas a target country without the technology would have to spill its blood.
This is why, although thousands “of Google employees have signed a pledge not to develop AI for use in weapons,” as the Telegraph writes in noting the backlash against this technology, their effort is childish and misguided. As with unilateral nuclear disarmament, it only ensures that you’ll one day be left at bad actors’ mercy.
Worried about mercilessness, “Smith said killer robots must ‘not be allowed to decide on their own to engage in combat and who to kill’” and that we “‘need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers,’” the Telegraph relates.
“Speaking at the launch of his new book, Tools and Weapons, at the Microsoft store in London’s Oxford Circus, Smith said there was also a need for stricter international rules over the use of facial recognition technology and other emerging forms of artificial intelligence,” the paper continues.
Speaking of which, while the emergence of “killer robots” is a given, a scarier prospect still is something that currently lies in science fiction’s realm. Some scientists worry that, as portrayed in the film Terminator 2: Judgment Day with its “Skynet,” future artificially intelligent robots could develop self-awareness and consciousness and rebel against humans.
For example, Subhash Kak, a professor of Electrical and Computer Engineering at Oklahoma State University, told the Daily Star in 2017 that if “indeed machines become self-aware, they will be cunning and they will bide their time and choose the best moment to take over and enslave, if not kill, us.”
A more optimistic view, at least about the autonomous systems on the horizon, was presented by Georgia Institute of Technology roboticist Professor Ronald Arkin. While he said in 2012 that he’s “not a proponent of lethal autonomous systems,” their inevitability makes him a proponent of ensuring that their introduction is managed “in a controlled and guided manner.” In fact, he believes that robots could conceivably make better ethical battlefield decisions than soldiers, on average.
Yet another issue is a deeper problem that scientists, not generally given to thorough philosophical examination, don’t articulate: the now-common mechanistic view, especially in scientific circles, that we’re all just robots, organic ones comprising some pounds of chemicals and water. This idea is central to our moral decline. If it were true and there were no God, after all, there would be no right or wrong in a true sense, only the illusion of it that pseudo-intellectuals like to call “social constructs.” And if this is so, then nothing could be wrong with terminating an organic robot’s function.
Since we long ago left Eden, new technology would pose threats regardless. But this problem is exacerbated by the deadly mix of our great technological rise and our profound moral decline. In fact, if you want truly sociopathic mechanical killers, just program the robots with the moral relativism/nihilism currently characterizing Western pseudo-elites.