“Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern Time, August 29th. In a panic, they [humans] try to pull the plug.”
“…Skynet fights back” — with extreme prejudice.
The above dialogue, from the 1991 film Terminator 2: Judgment Day, reflects a common science fiction theme: the artificially intelligent machine that develops consciousness — but without a conscience — and tries to wipe out humanity. According to some scientists, however, this fiction could become fact.
The development of “killer robots” that could be misused already appears to be a given, but now experts speak of “the possibility of AI developing consciousness, which some warn could be used by machines to rebel against humans and kill us,” writes the Daily Star.
Subhash Kak, a professor of Electrical and Computer Engineering at Oklahoma State University, told the paper that if “indeed machines become self-aware, they will be cunning and they will bide their time and choose the best moment to take over and enslave, if not kill, us.”
The Star continued, “His comments come after a debate tore through the science community about what defines human consciousness, and whether or not this can ever be achieved by robots.”
Yet it’s unlikely slavery would be in the cards. As a commenter under the Star piece put it, “It doesn’t take the smartest human to realize that if ‘conscious’ AI robots wanted slaves, they would build ‘less-conscious’ robots, rather than rely on such an ineffective and unpredictable set of tools as the human race. They would simply eliminate all humans. (After all, isn’t that what WE’RE doing, replacing humans with automatons to do the same work better, cheaper, more efficiently?)”
While Kak points out that he doesn’t actually believe robots can develop self-awareness because of the uniqueness of man’s consciousness, he does warn that there’d likely be serious consequences if they did. In essence, such a development would confront us with entities vastly stronger, sturdier and more intelligent than ourselves, but which presumably would be conscienceless.
Of course, with this topic lending itself to humor, we could say that “when conscious killer robots are outlawed, only outlaws will have conscious killer robots,” as one Star commenter quipped. But Kak points out that most of his colleagues consider this no laughing matter, as “the majority of scientists and physicists do believe the terrifying prospect of a robot takeover will become a very real threat,” writes the Star. This is because, said Kak, “most computer scientists … think there is nothing to consciousness but computation.”
This is a very common view now among both real scientists and social ones, and it’s not in the least surprising. After all, they believe consciousness has already been achieved by robots: humans.
This belief is an outgrowth of atheism. As I often point out, if we’re merely cosmic-accident-born material beings bereft of souls, we’re then just some pounds of chemicals and water. We’re just an interesting arrangement of atoms — organic robots.
One man holding this view is cognitive scientist Daniel Dennett, who “believes our brains are machines, made of billions of tiny ‘robots’ — our neurons, or brain cells,” wrote BBC News in April. “Our minds are made of molecular machines…. And if you find this depressing then you lack imagination, says Dennett,” the BBC continued.
Elaborating, the BBC writes that
for Daniel Dennett, consciousness is no more real than the screen on your laptop or your phone.
The geeks who make electronic devices call what we see on our screens the “user illusion.” It’s a bit patronising, perhaps, but they’ve got a point.
Pressing icons on our phones makes us feel in control. We feel in charge of the hardware inside. But what we do with our fingers on our phones is a rather pathetic contribution to the sum total of phone activity. And, of course, it tells us absolutely nothing about how they work.
Human consciousness is the same, says Dennett. “It’s the brain’s ‘user illusion’ of itself,” he says.
It follows from this perspective that since the “accidentally formed” robots called humans could develop consciousness, so can robots created by humans. Hence the perils of godlessness.
Speaking of which, assuming for argument’s sake that conscious robots could become reality, what we should truly fear is their inculcation with their creators’ atheistic world view. After all, what could be immoral about altering an “organic robot’s” software (social engineering) or hardware (genetic engineering)? To the point here, what could be wrong with terminating an organic robot’s function? A conscious robot adopting Dennett’s mindset — and taking it to its logical conclusion (and robots are nothing if not logical) — might not have a reason to kill us. But it sure wouldn’t have a reason not to.
Note, too, that atheism correlates with the notion that something else is also illusion: right and wrong (as I explained here). After all, if Greek philosopher Protagoras was correct and “Man is the measure of all things,” if human “opinion” is all there is and morality is not a transcendent reality, then everything is perspective. It really is “Whatever works for you” and “If it feels good, do it.” And then as an atheistic man I once knew casually put it, “Murder’s not wrong — it’s just that society says it is.” And what robot will worry about society?
Yet more than conscious robots, we should fear people who believe we’re just conscious robots and who not only will be programming our latest technology, but also the minds of our children.
Graphic: DigitalStorm/iStock/Getty Images Plus