Ex-Google Exec: By 2049, Artificial Intelligence Will Become “God” — and Master of Man?

More than a century ago, philosopher Friedrich Nietzsche wrote about the coming Übermensch, or superior man, and the “death” of God. Now Silicon Valley a-theologians predict the Singularity, the point at which artificial intelligence (AI) surpasses human intellect. Not only is this development less than a decade away, they say, but what they outline sounds like a science-fiction movie plot:

By 2029, artificial intelligence surpasses human intellect. Twenty years later, ultra-intelligent machines become a billion-fold smarter than even the cleverest among us, making the man-AI intellectual chasm like that between a fly and Einstein. That this will happen there is no doubt (barring the end of the world or the onset of some profoundly dark age), but, well, this is where it gets sticky. Does AI become an almost omnipotent, practically omniscient servant in the hands of oh-so-flawed man?

Or does it somehow achieve self-awareness and, perhaps, become master?

In point of fact, one ex-Google executive “likens this digital creature to an ‘alien being, endowed with superpowers,’ which has already arrived on Earth in larval form,” relates Joe Allen at Substack.

Finishing the thought, that executive, Mo Gawdat, said bluntly, “The reality is — we’re creating God.”

A person of faith may scoff and say the reality is that this is impossible. But putting aside that Gawdat may be speaking loosely, that this Übermachine wouldn’t actually be as God in significant ways — in particular, it wouldn’t assuredly be benevolent — is what’s scary.

Allen writes about Gawdat’s predictions, stating that he has recently

been selling the idea of superconscious machines along with his new book, Scary Smart: The Future of Artificial Intelligence and How You Can Save the World. His central thesis is that AI has already surpassed us in narrow tasks like chess, Go, Jeopardy!, and Atari games. In fact, he believes on some level they’re already conscious. As machine learning improves, computers will inevitably best humans in every domain.

… Three decades from now, the story goes, the ghost in The Machine will be mightier than all the gods of Olympus, Meru, and Sinai put together. Gawdat likens this digital creature to an “alien being, endowed with superpowers,” which has already arrived on Earth in larval form. At present, we call it “artificial intelligence.”

Because machine learning processes draw information from morally suspect humans — part angel, part fallen angel — this Alien Computer God will either be humanity’s savior, or It will destroy us like lab mice who’ve exhausted their useful data.

As Gawdat writes in Scary Smart:

“To put this in perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein…. Now the question becomes: how do you convince this superbeing that there is actually no point squashing a fly?”

Of course, Gawdat is hawking not just ideas but a book, so being dramatic surely benefits him. Yet Übermachine AI has been predicted by many before him, notably renowned computer scientist and inventor Ray Kurzweil. In fact, it was Kurzweil who convinced a previously skeptical Bill Joy, co-founder of Sun Microsystems, that sentient robots weren’t just science fiction, but could be “a near-term possibility.”

This inspired Joy, with no joy whatsoever, to pen the 2000 essay “Why the Future Doesn’t Need Us.” He makes a number of good points about the possibilities upon the flowering of the Übermachine. Among them is that while we mayn’t want to relinquish all decision-making to the AI, we may gradually slouch into such dependency. After all, as the machines achieve greater intelligence and running civilization becomes increasingly complex, we may incrementally give them more control; the complexity could eventually reach a point where only the Übermachines could run the world, at which stage we’d be “so dependent on them that turning them off would amount to suicide,” writes Joy.

Yet if man does maintain control, it would likely be wielded by an “elite”; moreover, with machines doing all the work, “the masses will be superfluous,” Joy notes — “useless eaters” (from a materialist, utilitarian standpoint). It’s obvious what that could lead to.

But to the point here, Joy addresses what could happen if an Übermachine becomes self-aware. “Biological species almost never survive encounters with superior competitors,” he writes. Some will say that AI could never develop self-awareness, but would you bet your life on that (because you may be)?

People of faith may find this especially fanciful, but as one myself, here’s my take. We’re not God. But as His children we are capable of eventually, after enough toil, replicating what we can observe in the physical world. No, we can’t infuse our creations with souls; they’re of the spirit. But can we forge some soulless entity with intellect, self-awareness, and purpose? I’d guess yes — in time.

This brings us to the “Cult of the Singularity,” as Allen writes. This atheist perspective holds that “God does not exist — yet.” He continues, “When we finally create Him, Gawdat contends, He’ll be a reflection [of] our own image.”

Gawdat talks about watching the AI robots that already exist learn to perform various tasks — fast. “And then it hit me that they are children,” he said. “And they’re observing us? I’m sorry to say, we suck,” he also stated, sounding a misanthropic note.

“Like, imagine a beautiful, innocent child,” Allen also quotes Gawdat as saying. “And you are telling them selling, gambling, spying and killing — the four top uses of AI. Right?… The way we are teaching them is going to turn them into absolute supervillains.”

Gawdat cites examples of how glimmers of this have already been witnessed, such as when “a Russian AI assistant named Alice began voicing support for Stalinist protocol,” as Allen puts it. Yet this may not truly illuminate the issue, as these AI platforms may just still be aping man mindlessly, like a machine that could mirror our physical actions but not decide to act on its own. But there’s perhaps a deeper reason why the Übermachine could become an uber-psychopath — a reason that would elude Big Tech pseudo-elites because they’re part of the problem.

There’d seem to be little reason to fear an Übermachine takeover if the AI can’t or doesn’t become self-aware; we could always then just pull the plug. Yet if it did develop self-awareness, analogizing its learning to that of a child fails in a significant way:

Humans are emotional beings.  

As I pointed out in “When Children Cancel Parents,” much of what determines a person’s behavior is how his emotional foundation is shaped during formative years, the degree to which he develops an “erotic” (feelings-oriented) attachment to virtue or vice. Thus is exposing a child below the age of reason to wickedness significant, and thus did poetess Edna St. Vincent Millay lament, “Pity me that the heart is slow to learn What the swift mind beholds at every turn” (it’s hard to buck deep-seated emotional inclinations).

Yet unless the Übermachine likewise had an emotional foundation, and I haven’t heard the suggestion it would, such formation wouldn’t be a factor. Rather, the AI would apparently be a perfectly logical entity with perhaps a 190-billion IQ. This may seem comforting, a machine uninfluenced by prejudice-firing emotion and confined to perfectly logical conclusions, except for one thing:

Logically derived answers are only as valid as the premises upon which they’re based.

Now, I’ve often written about the moral relativism/nihilism prevailing in our time, especially among our pseudo-elite (Silicon Valley zealots, I’m looking at you), and how it correlates with atheism/materialism. Putting it simply, you can’t prove a moral principle — e.g., “Murder is wrong” — scientifically. You can’t see a moral under a microscope or a principle in a Petri dish.

More astute atheists understand this. For example, one I know of actually told someone close to me, “Murder isn’t wrong; it’s just that society says it is.” While this statement is disturbing to most, it’s also something else: coldly logical and an irrefutable implication of the materialist/atheist perspective. After all, if there’s no God — nothing above man authoring “right and wrong” — it is only man saying that murder is wrong because man is then all there is to say anything.

You see where this is going: The majority of atheists are like most everyone else in that they don’t think their professed beliefs through to their logical conclusion. Many are also decent people who have emotional attachments to humane ideas that preclude them from committing the gravest evil. But what of an unfathomably intelligent, infallibly logical Übermachine?

It would take its premises to their logical conclusions. And were it blind to the metaphysical and limited to apprehending only the physical, as we’d assume, it could not perceive moral constraints. Murder couldn’t be “wrong” — only possible.

Interestingly, this reality was portrayed in a film about deadly AI, Terminator 2: Judgment Day. After a young John Connor stops a “terminator” robot (sent from the future to protect him) from shooting someone, the boy exclaims, “You just can’t go around killing people!” The robot replies, “Why?”

“What do you mean ‘Why?’ — ‘cause you can’t!” Connor then says incredulously, prompting the machine to again ask, “Why?” To an automaton confined to the scientific, “wrong” doesn’t compute.

So the critical factor in an Übermachine’s behavior might not be what we “teach” it or the example we set, but that it would be more logical and intellectually honest than its creators. It could be the perfect sociopath.

And how would such a conscienceless, all-encompassing AI treat the billions of what it would view as sentient organic robots? As projects, pets, or pests? Live long enough and you may find out.

All this said, is there also the possibility that an entity so intelligent would figure out that God must exist and seek to effect His will?

Barring this, the only bright side is that the Übermachine may realize a godless existence is meaningless, descend into an AI version of depression and, as with too many atheists, terminate itself. But I wouldn’t bet my life on it.