AI Taking Over — What Happens to Humans?

In the past few years, Artificial Intelligence (AI) has moved from the pages of science fiction to the front pages of the news. Proponents promise a utopian end to poverty, disease, war, and all the other ills of man. But for all of the promised benefits, the threat of AI gone awry should give everyone pause to consider whether the world is approaching the point of no return. Recent events show that AI can — by design or by flaw — create many of the very ills it promises to end.

For instance, AI is already replacing humans in the workplace. And the trend is expected to continue. In fact, in an article published Monday, ZDNET stated, “AI can automate the most annoying parts of your job,” and then asked, “But what if it can do your whole job?” That article goes on to say that AI could automate 25 percent of all jobs, and then breaks down the figures:

A global economics research report from Goldman Sachs says that AI could automate 25% of the entire labor market but can automate 46% of tasks in administrative jobs, 44% of legal jobs, and 37% of architecture and engineering professions. Of course, AI is the least threatening to labor-intensive careers like construction (6%), installation and repair (4%), and maintenance (1%).

And while it is possible that the changes to the economy would eventually be positive (with future generations moving to other fields not automated by AI), there are three caveats that cast real doubt on that hope.

First is the fact that as AI continues to gain prominence, it will keep moving into jobs that today would seem unlikely candidates for automation. Consider that AI is already replacing humans in jobs that just a few years ago would have been difficult to imagine. For instance, BuzzFeed announced in late January that ChatGPT would begin creating some of its content. So, while humans will read BuzzFeed articles, some of those articles will be written by AI. And while it is easy to joke that the average reader will not likely notice the difference (besides fewer typos), it is certain that other media companies are taking notice and will follow suit. After all, BuzzFeed’s stock climbed by 150 percent in the wake of the announcement.

The second caveat is closely related to the first. In previous iterations of the industrial revolution, humans went from low-paying, manual jobs to higher-paying, skilled jobs. Washing machines, electric ovens, and similar consumer products put household servants out of work, but many of those displaced workers went to work building washing machines, electric ovens, and the like. And subsequent generations never pursued servants’ jobs, moving on to more skilled employment. But this shift runs in reverse. Since “46% of tasks in administrative jobs, 44% of legal jobs, and 37% of architecture and engineering professions” will likely be replaced by AI in the near future, while “AI is the least threatening to labor-intensive careers like construction (6%), installation and repair (4%), and maintenance (1%),” administrative professionals, paralegals (and perhaps even attorneys), architects, and engineers may find themselves retooling downward to manual-labor jobs in construction, installation, and repair.

Finally, it is not inconceivable that with robotics and AI working hand-in-hand, even those manual labor jobs could eventually be threatened. The New American asked just a few weeks ago if AI would leave any work for humans to do. Given this trend, the answer is bleak. Warren G. Bennis — a pioneer in leadership studies — famously wrote, “The factory of the future will have only two employees: a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.” Bennis was not pessimistic enough to foresee that neither the man nor the dog would be necessary in the world of AI and robotics.

As the ZDNET article goes on to say:

AI’s potential to displace 300 million jobs is a primary concern for workers and tech moguls alike. Last week, notable names in the industry, like Steve Wozniak, Rachel Bronson, and Elon Musk, co-signed an open letter to pause AI experiments. The letter comes out of fear that AI development is moving too quickly for humans and can topple our society as we know it.

And the economic dangers of AI pale in comparison to other dangers. Way back in 2021, The New American made readers aware that a former Google executive said that by 2049, AI will become “God” over man. And a more recent story of an AI chatbot convincing a Belgian man to commit suicide shows what can happen when man places his “faith” in an amoral bunch of computer code whose only objective is to keep the conversation moving at all costs.

Against this backdrop, consider the implications of Romanian Prime Minister Nicolae Ciucă adding an AI advisor to his Cabinet. Commenting on his decision, Ciucă said, “I have the conviction that the use of AI should not be an option but an obligation to make better informed decisions.”

And Monday, a computer programmer and senior contributing editor at ZDNET published an article detailing how he used ChatGPT to debug computer code he had written. For those unfamiliar with the process, computer code is written by humans in languages that can be understood by computers. But code almost always has flaws — bugs — that either prevent the code from running or cause unintended consequences. Programmers often spend more time debugging code than writing it. So, this programmer fed his code through AI and asked it to handle the debugging.

And it did.
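For readers who want to see what that workflow looks like in practice, here is a minimal sketch using the OpenAI Python client. The buggy function, the model name, and the prompt are illustrative assumptions on this writer’s part, not the actual code from the ZDNET article:

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# An intentionally buggy function to hand to the model (hypothetical example)
buggy_code = '''
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / (len(numbers) - 1)  # bug: denominator is off by one
'''

# Ask the model to play debugger, much as the ZDNET programmer did
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": "Find and fix the bug in this Python function, "
                   "and explain what was wrong:\n" + buggy_code,
    }],
)

print(response.choices[0].message.content)

In a sketch like this, the model’s reply would typically contain the corrected function (dividing by len(numbers)) along with an explanation of the off-by-one error. The striking part is that the entire “debugging” happens inside the model’s answer to a plain-English request.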

The implications are huge. It means that ChatGPT understood his request, understood the programming language he used, found the problems in his code, and fixed them. There is not much of a leap from that to AI writing code, which means that as AI advances, it could write another AI. And lest this writer seem alarmist, the programmer who wrote that article seems to share those and other concerns. He wrote:

And here’s where I take this conversation to a very dark place.

Imagine that you can ask ChatGPT to look at your Github repository for a given project and have it find and fix bugs. One way could be for it to present each bug it finds to you for approval, so you can make the fixes.

But what about the situation where you ask ChatGPT to just fix the bugs, and you let it do so without bothering to look at all the code yourself? Could it embed something nasty in your code?

And what about the situation where an incredibly capable AI has access to almost all the world’s code in Github repositories? What could it hide in all that code? What nefarious evil could that AI do to the world’s infrastructure if it can access all our code?

Let’s play a simple thought game. What if the AI was given Asimov’s first rule as a key instruction. That’s a “robot shall not harm a human, or by inaction allow a human to come to harm.” Could it not decide that all our infrastructure was causing us harm? By having access to all our code, it could simply decide to save us from ourselves by inserting back doors that allowed it to, say, shut off the power grid, ground planes, and gridlock highways.

To put it in the for-what-it’s-worth column, that hypothesis is likely not dark enough. Given that an AI chatbot has convinced a human to kill himself, that a former Google executive boasts of creating an AI “God,” and that a government leader is under the “conviction” that taking counsel from AI is “an obligation to make better informed decisions,” shutting off power grids, grounding planes, and gridlocking highways are low on the list of the evil things AI could throw at the human race.

AI stands as evidence that just because humans can do something, it does not necessarily follow that they should.