Humanity tends not to acknowledge a problem until it’s too late.
This is especially true with things of our own creation. Like Dr. Frankenstein in Mary Shelley’s original novel, we are so often enthralled by the rush of “playing God” that we remain blind to the consequences until the hideous reality hits us in the face, and by then it’s too late.
Will this be the case with artificial intelligence?
At last, some voices in the corridors of government are beginning to acknowledge that the rapid growth of AI, while creating many potential net benefits for humanity, can also come with many dangers that might prove catastrophic if not accounted for early.
“You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes.”
Those are the words of Representative Jay Obernolte (R-Calif.), the only member of Congress with a master’s degree in artificial intelligence, who spoke with The New York Times about the potential dangers of AI and the need for lawmakers to take action.
Obernolte further told the outlet: “Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what AI is.”
But the California Republican laments that his colleagues have not arrived at that agreement.
Politicians on Capitol Hill have been quick to point out their own knowledge deficiencies when it comes to artificial intelligence. As an NBC News report from last month noted, Senate Majority Whip Dick Durbin (D-Ill.) called it “very worrisome” that he’s “got a lot to learn about what’s going on.”
Senator Richard Blumenthal (D-Conn.), who sits on the Commerce, Science, and Transportation Committee, called AI “new terrain and uncharted territory.”
And John Cornyn (R-Texas), one of the most respected senators in GOP establishment circles, admitted that he has only an “elementary understanding” of artificial intelligence.
As NBC News noted, an AI “arms race” has begun:
Now, artificial intelligence has burst on the scene, threatening to disrupt the American education system and economy. After last fall’s surprise launch of OpenAI’s ChatGPT, millions of curious U.S. users experimented with the budding technology, asking the chatbot to write poetry, rap songs, recipes, résumés, essays, computer code and marketing plans, as well as take an MBA exam and offer therapy advice.
Seeing the unlimited potential, ChatGPT has spurred what some technology watchers call an “AI arms race.” Microsoft just invested $10 billion in OpenAI. Alphabet, the parent company of Google, and the Chinese search giant Baidu are rushing out their own chatbot competitors. And a phalanx of new startups, including Lensa, is coming on the market, allowing users to create hundreds of AI-generated art pieces or images with the click of a button.
The question now being presented is: What approach will the federal government take? Will it largely step back as it has with Big Tech? Or will it intervene early on while AI’s exponential capabilities are still nascent?
Rep. Ted Lieu (D-Calif.), who has a degree in computer science, recently used AI to write a resolution calling for the congressional regulation of AI.
While Lieu believes AI will eliminate jobs, he thinks it will also create new jobs. “Artificial intelligence to me is like the steam engine right now, which was really disruptive to society,” Lieu told NBC. “And in a few years, it’s going to be a rocket engine with a personality, and we need to be prepared for enormous disruptions that society is going to experience.”
Recently, The New American reported that Romanian Prime Minister Nicolae Ciucă has added an artificial intelligence advisor to his Cabinet. The AI advisor, Ion, is touted as the first of its kind. Its role within the government is to analyze social media networks in order to inform policymakers “in real time of Romanians’ proposals and wishes,” the prime minister said. Romanians can provide Ion with their opinions via Twitter, a dedicated website, and at some in-person locations.
Another important development is the use of artificial intelligence for military purposes. In one notable example, an AI-powered drone aircraft bested a human-controlled aircraft in a dogfight organized by Chinese military researchers.
The concerns regarding artificial intelligence involve not only the prospect that it will replace humans in the workforce, but also the possibility that it will develop sentience, eventually break free from human control, and perhaps even come to control humanity.
Human beings are driven by convenience, and we tend to do what is convenient even when it is not best for ourselves or for society in the long term.
Thus, although it will be increasingly tempting to replace human beings with AI in ever more areas of life, it may be wise to place restrictions on the practice in certain areas, a question that deserves open public debate; a ban on AI assuming governmental decision-making roles could be a good place to start.