From The Terminator to D.A.R.Y.L., science-fiction writers have presented memorable, though very different, visions of “self-aware” robots. But now it’s being reported that science itself has taken a giant step toward creating such a machine — though, it seems, there was a similar report in 2016. Whatever the state of research into creating self-aware machines, however, are we aware of the implications such developments have for man?
As to the robot in the news currently, it’s less like the Terminator and more like what was left of him after his first-film encounter with a hydraulic press. As the Sunday Express reports, “Engineers at Columbia University, in New York, have reached a pinnacle in robotics inventions, inventing a mechanical arm able to programme itself — even after it is [sic] malfunctioned. Professor Hod Lipson, who leads the Creative Machines lab, where the research was carried out, likened the robotic arm to how a ‘newborn child’ adapts to their [sic] environment and learns things on its own.”
“The group of scientists claimed this is the first time a robot has shown the ability to ‘imagine itself’ and work out its purpose, figuring out how to operate without inbuilt mechanics. In the study, published in the journal Science Robotics, Prof Lipson said: ‘This is perhaps what a newborn child does in its crib, as it learns what it is,’” the paper continued.
Moreover, “The robotic arm was programmed with no knowledge of physics, geometry or motor dynamics,” according to the Telegraph. Providing details about Mr. Arm’s accomplishments, BelfastLive writes:
US scientists gave it the ability to “imagine itself” using a process of self-simulation. The advance is said to be a step towards self-aware machines.
… At the start of the study, the robot had no idea what shape it was, whether a spider, a snake or an arm.
To begin with, it behaved like a “babbling infant”, moving randomly while attempting various tasks.
Within about a day of intensive “deep learning”, the robot built up an internal picture of its structure and abilities.
After 35 hours of training, the “self model” helped the robot grasp objects from specific locations and drop them in a receptacle with 100% accuracy.
Even when relying entirely on its internal self model — the machine’s “imagination” — the robot was able to complete the pick-and-place task with a 44% success rate.
PhD student Robert Kwiatkowski, another member of the team, said: “That’s like trying to pick up a glass of water with your eyes closed, a process difficult even for humans.”
Other tasks included writing text on a board using a marker.
To test whether the robot could detect damage to itself, the scientists replaced part of its body with a deformed version.
The machine was able to recognise the change and work around it, with little loss of performance.
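For readers wondering what “self-simulation” actually looks like in software, here is a minimal, purely illustrative sketch (not the Columbia team’s code). The two-joint toy arm, the hand-picked trigonometric features, and the least-squares fit below are all simplifying assumptions standing in for the 35 hours of deep learning described above; only the general recipe is the same: babble randomly, record what happens, fit an internal “self model,” then let the machine “imagine” a motion with that model before executing it.

```python
# A toy illustration of learning a "self model" by motor babbling.
# NOTE: This is not the Columbia team's method or code. The two-joint arm,
# the trigonometric features, and the least-squares fit are simplifying
# assumptions; the published work used a physical arm and deep learning.
import numpy as np

rng = np.random.default_rng(0)

def true_arm(angles):
    """Ground-truth forward kinematics, hidden from the learner."""
    a1, a2 = angles
    x = np.cos(a1) + 0.7 * np.cos(a1 + a2)
    y = np.sin(a1) + 0.7 * np.sin(a1 + a2)
    return np.array([x, y])

def features(angles):
    """Simple trig features standing in for a learned deep network."""
    a1, a2 = angles
    return np.array([1.0, np.cos(a1), np.sin(a1),
                     np.cos(a1 + a2), np.sin(a1 + a2)])

# 1. "Babbling infant" phase: issue random commands, record where the tip lands.
commands = rng.uniform(-np.pi, np.pi, size=(500, 2))
observations = np.array([true_arm(c) for c in commands])

# 2. Fit the internal self model from that experience alone.
X = np.array([features(c) for c in commands])
W, *_ = np.linalg.lstsq(X, observations, rcond=None)

def self_model(angles):
    """The robot's 'imagination': predicted tip position for a command."""
    return features(angles) @ W

# 3. Plan a reach using only the self model (no peeking at true_arm).
target = np.array([0.5, 1.2])
candidates = rng.uniform(-np.pi, np.pi, size=(5000, 2))
best = min(candidates, key=lambda c: float(np.linalg.norm(self_model(c) - target)))

print("imagined tip position:", self_model(best))
print("actual tip position:  ", true_arm(best))
```

The gap between the two printed positions is the toy counterpart of the “eyes closed” result the researchers describe: the machine acts on its imagined self rather than on direct feedback.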
Interestingly, though, this isn’t the first time there has been news of Professor Lipson making such a breakthrough. As an overstated Daily Signal headline read in 2016, “Professor proves science rules, creates self-aware robot.”
This “déjà vu all over again” most likely just reflects the media attitude that if it was a good headline in 2016, it’s a good headline today. This doesn’t diminish the research’s significance, however, because the implications of artificial intelligence (AI) are staggering.
For starters, the Signal tells us that “Lipson focuses his research on evolutionary robotics, a branch of robotics that uses processes inspired by biological evolution to ‘breed’ new robots, rather than design them manually.”
This certainly sounds fanciful. But as with polling, much depends here on phraseology. Asserting that robots will one day “breed” evokes far more skepticism than simply stating that “self-replicating” robots may lie ahead. After all, we already have machines building other machines in factories (that’s how humans lose their jobs).
As for losing their sanity, many fear the threat they believe AI poses. The late physicist Stephen Hawking, for example, warned that AI could be the “worst event in the history of our civilization” and “could spell the end of the human race.” But should we really fear creating a monster, a Terminator-like “rise of the machines”?
Lipson himself doesn’t pooh-pooh this, saying we “cannot completely control” AI any more than we could a child we raise and teach, but who then leaves the nest and exercises his own free will. Yet there’s also a deeper threat here.
Claiming that philosophers haven’t really unraveled what it means to be human, Lipson stated in a 2015 RT interview that one of his goals is to create in a robot “human-level intelligence, consciousness, if you like, self-awareness — these things that we think are very-very human” — for the purposes of understanding “what it means to be human.”
So if the prospect of being terminated by psychopathic AI isn’t scary enough, now some would say there’s a threat to our very conception of ourselves, one that can cause “social structure” to “begin to unravel,” as Lipson puts it.
Yet this is already occurring, and it brings us to a general divide between atheists and theists. Secularists (who assume we have no souls) who’ve thought matters through will essentially conclude that “since man is just some pounds of chemicals and water, an organic robot,” Lipson’s dream is entirely logical: It’s just a matter of making “artificial” robots — better ones.
Theists may respond that man is unique, that self-awareness, consciousness, could never be achieved in a soulless entity. Yet as a man of faith, let me try to lend perspective.
Beginning with the simpler matter of capacities, note that a tiger is stronger (even pound for pound), faster, and generally far more physically formidable than a man, but Judeo-Christian theists don’t believe it has a soul. The best computers can now beat the best humans at chess — their first victory came in 1997, when IBM’s Deep Blue bested Garry Kasparov — and computers certainly don’t have souls (though, maddeningly, they can seem to have a spirit of mischief).
The point is that there’s every reason to believe that, given enough time and technological advancement, robots can best man in every physical and intellectual endeavor. This could yield a world where machines do all the work and humans are left to play and luxuriate, which presents its own problems — “An idle mind is the Devil’s workshop.” Or it could lead to something else quite devilish.
Remember that what’s conceptualized here isn’t just Robbie the Robot or Gort. On the horizon lie advances not just in robotics, but in nanotechnology and artificial DNA, with staggering implications. What if future self-replicating, self-aware robots not only could be programmed with the capacity to learn but had, or could develop, ambitions, a will, and emotions (Lipson says the latter are “possible in sort of more technical ways”)? Whether these things would be precisely what man possesses or computer analogs — and many would argue there’d be no difference — isn’t the point.
It’s that if these automatons surpassed us in worldly capacities and had or developed autonomy, well, “Biological species almost never survive encounters with superior competitors.”
Or, at least, so said Sun Microsystems cofounder Bill Joy in his famous essay “Why the Future Doesn’t Need Us.” Writing in 2000, Joy worried — and noted how many other scientists agreed — that “we may well not survive the encounter with the superior robot species,” as he put it. The kicker?
With the rate of technological advancement, he predicted that we’d likely have the capacity to create such mechanical menaces by 2030.
That time frame may now seem a tad unrealistic, as Lipson and other AI experts apparently agree, and, in truth, futurists have a terribly poor track record. Then there’s Oxford University computer science professor Dr. Nigel Shadbolt, who rejects the doom-and-gloom scenarios and says, “It is not artificial intelligence that should terrify you; it is natural stupidity.”
But that may be the point. Professor Lipson told RT that an AI “robot, like a child, will keep on learning on its own, and if it was raised in a good way, then I think things will go well.” That’s a big “if” for people in any age, and it brings us to what’s perhaps the crux of the problem. With our morals decaying rapidly as our scientific knowledge explodes, we face the prospect of being people with low-tech virtue trying to manage a high-tech world.
It would be poetic justice if we sons of creation forgot God, tried to play god, and then were taught by our own creation that we were anything but.
Photo: PhonlamaiPhoto/iStock/Getty Images Plus