Will It Spell Man’s Doom When AI Starts Creating AI?
Article audio sponsored by The John Birch Society

“Biological species almost never survive encounters with superior competitors.” So wrote Bill Joy, co-founder of Sun Microsystems, while warning 22 years ago that artificial intelligence (AI) could eventually terminate man. Echoing this, the late famed physicist Stephen Hawking stated in 2014 that the development of full AI “could spell the end of the human race.” More recently, industrialist and visionary Elon Musk has called AI “our biggest existential threat” and said that “we are summoning the demon.”

Another man mindful of this potential threat is author and documentary filmmaker James Barrat. And because AI is being, and will continue to be, developed, he puts forth a proposition:

We must make sure it’s “friendly.”

Barrat made his comments on last Thursday’s edition of Tucker Carlson Today, snippets of which were played on Tucker Carlson Tonight the evening before. In the latter segment, host Tucker Carlson introduced the topic with an under-the-radar story.

“DeepMind is a British technology company that is owned, like most things, by Google,” said Carlson. “DeepMind says it’s developed an artificial intelligence that exceeds the capacity of human beings. Think about that. So the top research scientists there just said, quote, ‘The game is over.'”

As to this, Barrat has warned that AI could be like nuclear fission, far more powerful than its creators anticipate, except with the capacity to get out of control. Responding to a question about this, Barrat, who wrote the 2013 book Our Final Invention: Artificial Intelligence and the End of the Human Era, began by quoting Hawking. To wit:

“The problem with AI is, in the short term, who controls it; in the long term, can it be controlled at all?”

Shortly thereafter Barrat explained, reports the Washington Examiner:

“We have taught AI to master a lot of human abilities. AI will get better than us at artificial intelligence research and development. So what happens when AI gets better than us at making AI? See, our intelligence goes like this,” he said, waving his hand in an imaginary flat line.

“We are not getting — our IQ on average is not getting any higher. Machine intelligence is going like this,” Barrat added, tracing an exponential curve with his hand. “And we’re in this curve somewhere. Once AI is programming AI and making better and faster AI, that idea is called the intelligence explosion. Then AI sets the pace of intelligence growth, not humans.”

Man “is not the top end of intelligence; we have no reason to think we are,” Barrat also stated. “So it’s … the way we’re headed — we will make machines that are much, much smarter than we are now. We have to make them friendly” (video below).

Carlson concluded the segment by asking, “How do you make machines friendly, especially when they’re smarter than you?” For sure, man’s track record isn’t exactly stellar: We try making our children friendly and virtuous, after all — but as our fallen world attests, we often fail miserably.

Whatever the case, Barrat says that we don’t have much time left to take precautions. His concerns were encapsulated in a prediction, made last year, that AI will surpass human intellect by 2029. And 20 years after that, ultra-intelligent machines will have become a billion-fold smarter than even the cleverest among us, making the man-AI intellectual chasm like that between a fly and Einstein.

That this will happen there is little doubt (barring the end of the world or the onset of some profoundly dark age), but, well, this is where it gets sticky. Does AI become an almost omnipotent, practically omniscient servant in the hands of oh-so-flawed man?

Or does it somehow achieve self-awareness and, perhaps, become master?

The man making the above prediction, an ex-Google executive named Mo Gawdat, perhaps knows where he stands. “The reality is,” he stated, “we’re creating God.”

While creating God is beyond our capacities, the prospect that we could create an Übermachine devil inspired the aforementioned Bill Joy to write his famous 2000 essay “Why the Future Doesn’t Need Us.” In it, Joy addresses what could happen if an Übermachine becomes self-aware. “Biological species almost never survive encounters with superior competitors,” he writes. Of course, some will say that AI could never develop self-awareness, but would you bet your life on that (because you may)?

People of faith often find the prospect of self-aware machines especially fanciful. But as one myself (a man of faith, not a self-aware automaton), here’s my take: We’re not God. But as His children we are capable of eventually, after enough toil, replicating what we can observe in the physical world. No, we can’t infuse our creations with souls; they’re of the spirit. But can we forge some soulless entity with intellect, self-awareness, and purpose? I’d guess yes — in time.

This brings us to what’s truly frightening: Western civilization itself is becoming soulless in worldview. So while Musk calls AI our greatest “existential threat,” what actually may fit that bill is man’s inherent lack of virtue (fallen state) and Western civilization’s rapid moral decay, which cause us to misuse any and all technology, from cutting instruments to computing.

So while we can try to make AI “friendly,” the real question is this: Its intellectual superiority could be incalculable, but would, and could, it be morally superior to its creator?

For those interested, James Barrat’s full appearance on Tucker Carlson Today is here.