Scientist: Advanced Robots May Enslave Humans — Then Wipe Us Out

While a new study shows that people are getting dumber, robots are getting smarter. In fact, says one expert, artificially intelligent automatons may reach a point where they will outsmart man, seize control, and then enslave humans before sending us the way of the dodo. Moreover, he says that some of his colleagues will actually welcome this mechanical domination.

Robots have figured prominently in the news in recent years, with stories about how they’ll supplant workers and even spouses, as some have actually advocated robot-human “marriages.” (Given how acrimonious divorces can be, I can’t think of anything that would make robots want to wipe us out more!) But then there’s the Terminator-like threat posed by artificial intelligence. As the Daily Mail reports:

Max Tegmark, a Swedish-American physicist and cosmologist … is a professor at the Massachusetts Institute of Technology and the scientific director of the Foundational Questions Institute.

During an almost 20 minute long presentation at the Ted 2018 conference, held in Vancouver, Canada, Professor Tegmark outlined the opportunities and threats he feels AI will bring.

‘One option my colleagues would like to do is build super intelligence, and keep it under human control like an enslaved dog,’ he said during a talk in April, which has just been published on the organisation’s website.

‘But you might worry that maybe we humans just aren’t smart enough to handle that much power.

‘Also aside from any moral qualms you might have about enslaving superior minds, you should be more worried that maybe the superintelligence could outsmart us.

‘They could break-out and take over.’


Some people aren’t bothered by this prospect, either. “Max revealed,” writes the Daily Star, which, oddly, appears to be on a first-name basis with the professor, that “many researchers would welcome this possibility, as long as they have been a part of it.”

“He added,” the paper continued, “‘I have colleagues who are fine with this, and it could even cause human extinction.’” This isn’t surprising, given scientific curiosity and man’s ego. Faithless materialists, these are people who’d find it gratifying if they could birth a “superior race” destined to rule the planet.

This is their attitude as long as, as Tegmark put it, “they feel the AIs are our worthy descendants, like children.” Again, no shock. From these scientists’ materialist perspective, man is, as I often point out, just some pounds of chemicals and water, an organic robot. So in their thinking, the extinction in question would merely be a natural evolution in which one population of robots is supplanted by another population of robots — a superior one.

Interestingly, note that this also reflects the attitude of pseudo-elitists content to replace Western populations with foreign ones via massive Third World (im)migration. Though these scientists go one better: They exhibit a lack of loyalty not just to a nation, but to the whole species.

Tegmark also asks, “But how would we know the AIs have adopted our best values?” This is a good question since we don’t know what these “values” are ourselves in this relativistic/nihilistic age. If we did, we’d speak of “virtues,” those good moral habits that are objectively correct and which have been clearly defined (honesty, prudence, temperance, faith, joy, diligence, etc.). “Values” is part of the atheistic lexicon, referring simply to what man happens to value at the moment; thus, without an objective yardstick (Truth) with which to judge values, we couldn’t even rightly say that any were better or worse — or “best.”

Tegmark is not the first to warn of the threat posed by AI. In 2015, for instance, the late physicist Stephen Hawking told El País that AI would overtake man within a century. And echoing Tegmark, he warned that “we need to make sure the computers have goals aligned with ours.” How this could be accomplished if the robots truly were “intelligent,” however, is the question. Of course, truly dangerous (when misused) is the combination that makes man a reflection of God: intellect and free will.

Perhaps unlike his Terminator-dreaming colleagues, Tegmark struck a democratic note, the Star relates, asking, “Shouldn’t those people who don’t want human extinction have a say in the matter too?”

But our say may be irrelevant. Adding important perspective, Tegmark also noted, “If we don’t go far beyond today’s technology, the question isn’t whether humanity is going to go extinct, [but] more whether we are going to get taken out by the next killer asteroid, supervolcano or some other problem” — such as gamma-ray bursts or roving black holes (it’s a big and dangerous universe out there).

But while End Times stories make for good headlines and great nightmares, the real point is that our own personal extinction — our death — comes all too soon. This is why faith, and hope in something beyond this transitory world, matter. And if there were no God? Then, well, those Dr. Frankenstein scientists would be correct: We couldn’t rightly say human domination is any better than robot domination. Survival of the fittest would be the rule — and may the best robot win.

Photo: tampatra/iStock/Getty Images Plus