It’s easy to call warnings about artificial intelligence (AI) “sensationalism.” (So many things today are, after all.) In fact, when I told a “terrified” ex-AI researcher on X yesterday that I was writing a story about the alarm he sounded and had questions for him, another respondent — an author, it turns out — wrote of me, “Typical journo… If it bleeds, it leads.” Perhaps there’s no worse insult than “typical journo,” a status below used-car salesman and personal-injury lawyer and just above politician. (Maybe.) But I think my psyche will survive. Will we, however, survive the coming AI revolution?
No, that question isn’t sensationalistic, at least not according to the man I approached, ex-OpenAI safety researcher Steven Adler. Explaining Monday why he quit his OpenAI position in November, he posted the following quite alarming message on X:
Honestly I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point? [Tweet below.]
The “AGI” acronym Adler uses stands for “artificial general intelligence.” This is, IBM’s website explains,
a hypothetical stage in the development of machine learning (ML) in which an artificial intelligence (AI) system can match or exceed the cognitive abilities of human beings across any task. It represents the fundamental, abstract goal of AI development: the artificial replication of human intelligence in a machine or software.
In other words, this sounds much like the thinking machines so often portrayed, ominously, in science fiction. “HAL” from 2001: A Space Odyssey comes to mind.
Just Science Fiction?
As for the “alignment” Adler spoke of, that concerns preventing the ominousness. As TechTarget informs:
AI alignment is a field of AI safety research that aims to ensure artificial intelligence systems achieve desired outcomes. AI alignment research keeps AI systems working for humans, no matter how powerful the technology becomes.
But how effective will this be? As Adler said, no “lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.” He’s not the first expert to warn of an AI-authored apocalypse, either.
For example, legendary late physicist Stephen Hawking sounded an alarm in 2014. “The development of full artificial intelligence,” he bluntly told the BBC, “could spell the end of the human race.”
Echoing this is Stuart Russell, a professor of computer science at the University of California, Berkeley. He told the Financial Times, Newsweek reported Tuesday, that the
AGI race is a race towards the edge of a cliff…. Even the CEOs who are engaging in the race have stated that whoever wins has a significant probability of causing human extinction in the process because we have no idea how to control systems more intelligent than ourselves.
Then there were the comments made by former Google chief business officer Mo Gawdat in 2021. He likened “this digital [AGI] creature to an ‘alien being, endowed with superpowers,’ which has already arrived on Earth in larval form,” related Joe Allen at Substack at the time.
Finishing the thought, Gawdat said unabashedly, “The reality is — we’re creating God.”
Deity — or Devil?
Creating God is impossible, of course. Yet Gawdat isn’t alone in harboring these pseudo-deific AI visions. For example, Elon Musk revealed in 2023 that Google co-founder Larry Page aspires to create an AGI “digital god.”
Also on Page’s page is former Google engineer Anthony Levandowski. As commentator Matt Rowe related in 2023, Levandowski predicts that AGI will
have the entire internet at its disposal, much like our own nervous system, and all the phones, tablets, PCs, and other connected devices would, in effect, be its sense organs. The AI will see and hear everything and be everywhere at all times. Levandowski argues that the only rational word to describe that is “god” — and he believes that humans will basically have to worship and pray to AI to influence its decision-making.
Impossible — or Probable?
On the other side, however, are those insisting AI will always just be a “correlation machine.” It will only ever be as good as the programming humans provide, the argument goes, and will always be subject to having its plug pulled. But is this short-sighted?
Consider: If something occurs in the physical world, we know it’s possible. Moreover, if we can observe it long enough, we can come to understand it. And can we not then also, eventually, replicate it?
All right, well, humans — these beings with intellect and free will — “occur” in our world. So as technology advances and our understanding of the brain increases, could we eventually create an artificial entity that also possesses intellect and free will? I suspect so.
This theory could be tenable, too, to both secularists and theists. The former’s implied belief is that man is a mere organic robot, some pounds of chemicals and water. So the idea that we could create inorganic robots mirroring us shouldn’t seem far-fetched.
As for theists, remember that we’re not talking about replicating what cannot be observed in the material world: the soul. At issue is only replicating what can be. So unless one believes that the “mind,” which philosophers often consider a non-material entity separate from the brain, is required for intellect and free will, there may be little if any reason to doubt what’s at issue here: the sentient-machine thesis.
Inevitable?
Returning to Adler, I don’t know if he believes AGI can become “sentient.” He didn’t respond to my questions by publication time. (In fairness to him, he said on X that he was taking a break and could be on vacation.) But he apparently does worry that it can go rogue. And while he emphasizes the need for effective regulations, will this realistically happen? As Adler puts it, a given lab may truly want to develop AGI “responsibly.” Yet, “others can still cut corners to catch up, maybe disastrously.”
Moreover, it’s as with nuclear weapons. The United States could’ve unilaterally refused to develop them. But this wouldn’t have stopped the Soviets and, later, the Chinese, from doing so. And, really, having the soulless Beijing communists as AGI’s sole owners might be as ominous as the machine “owning” itself. So what can we conclude? Is there hope?
Well, sensationalistic or not, perhaps it’s only God who can save us from artificial super-intelligence enabled by human stupidity.