Man to Machine: “Human” Rights for Robots?

Will we one day see the hashtag #RobotLivesMatter? While a ridiculous notion, it’s not all that far from what has, quite oxymoronically, been proposed by a European Parliament (EP) committee: human rights for robots.  

It’s all part of a “draft report,” writes RT.com, “approved by 17 votes to two and two abstentions by the European Parliament Committee on Legal Affairs, [which] proposes that ‘The most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations.’”

Not surprisingly, many observers are aghast. As National Review notes:

No! If robots have rights, the concept of human rights will cease to be objective, inherent, and inalienable, but rather, would become subjective and based on perceived individual capacities and capabilities.

Machines are not — and could never be — moral agents. They are mere things. Even the most sophisticated AI computer would merely be the totality of its programming, no matter how sophisticated and regardless of whether it was self-learning.

Some of the report’s principles are derived from fiction. For example, citing “Asimov’s Laws,” its framers write, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Other report suggestions make sense, such as proposing a “kill switch” mechanism to ensure robots going rogue can be easily shut down.

But then there’s the thinking gone rogue, with the report reading that “in the scenario where a robot can take autonomous decisions, the traditional rules will not suffice to activate a robot’s liability, since they would not make it possible to identify the party responsible for providing compensation and to require this party to make good the damage it has caused.”


Bizarre reasoning. Dogs certainly are less predictable than a programmed machine, and their owners generally don’t want them to bite anyone. Yet if a dog does, we don’t wonder if Mrs. Peabody or her pooch should be held accountable.

The framers explain the supposed complication, writing that “the more autonomous robots are, the less they can be considered simple tools in the hands of other actors.” Yet all this means is that they’re complex tools in the hands of other actors. They would still be mere things, with owners responsible for their operation.

Some of the concern appears to stem from the idea that a computer system could become complex enough to achieve “self-awareness.” Many nightmarish stories, notably the Terminator series (video below), have been written about self-aware machines deciding to become masters of man — or even wipe him out.

https://www.youtube.com/watch?v=ih_l0vBISOE

Transitioning from science-fiction horror to comedy, a proposal to afford robots rights invites humor. What will the advent of truly “sophisticated” robots mean? A man who puts a hammer through his computer screen just has a temper problem. But if he kicks a domestic-servant robot, will he have a legal problem? Will he be charged with domestic abuse and put in the pokey?

There also have been many stories lately about sex robots becoming common. Will this mean Islamic State-level sexual slavery for these automatons? Will there be a new “No means ‘No!’” campaign? Will they feel used?

Certain things are for certain: Any self-awareness would quickly be followed by special-interest-group status and the emergence of an Al Sharpton-cum-C-3PO activist-bot. We’d hear admonishments to “check your human privilege” and a new definition of “mechanism.” “Why, how dare you say my feelings aren’t equal to yours, you mechanist hater!”

Oh, also, white males would still be the bad guys and, assuming the self-aware automatons had diminished free will and intellectual prowess, the Democrats would advocate their being granted voting rights.

Really, though, any serious effort to elevate the non-human to human status is no laughing matter. Mady Delvaux, the draft report’s rapporteur, warned of this herself, saying in an interview last week that a “‘robot is not a human and will never be human. A robot can show empathy but it cannot feel empathy….’ Delvaux also proposes a charter for designers that robots ‘should not be made to look emotionally dependent. You must never think that a robot is a human, that he loves you or he is sad,’” reported RT.

Why not? Well, just imagine robots one day became so lifelike in form, expression, and reaction that we’d have trouble discerning the difference between man and machine. If many people bonded emotionally with them, the human propensity to let the heart rule the head could create a strong impetus for “robot rights.” As stated in the 1985 film D.A.R.Y.L., the general feeling could be that “a machine becomes human when you can’t tell the difference anymore.”

Of course, knowing the difference between fact and truth informs us that the “push to conjure [human] rights for non-human and undeserving entities — machine, animal, nature, etc. — is a serious symptom of societal decadence that should be mocked and rejected out of hand,” to quote National Review. But will it?

It’s as if the robot-rights theorists believe that while automatons are now in an infantile state, they can “grow up.” Consider: As a child matures and eventually becomes emancipated, legal responsibility for his actions transfers from his parents to him. Likewise, the notion here is that a robot could become complex enough that responsibility could shift from owner to the owned.

The difference is that a child is never rightly owned, except by God, because he isn’t a thing. The babe isn’t afforded adult rights and responsibilities as he can’t yet negotiate adulthood, but this doesn’t mean his rights are “subjective and based on perceived individual capacities and capabilities,” as National Review put it. Rather, he acquires them because what he becomes capable of occurs within an all-important context: what he is.

Thus does the push for animal rights, and suggestions for robot rights, make sense in an atheistic age that questions what man is. After all, if man has no soul, what is he but some pounds of chemicals and water — an organic robot? He then is just another animal, only different from other animals/organic robots in his “capacities and capabilities.” As far as the robots he builds go, they differ only in the materials of which they’re composed, much in the same way that a plastic gun and a metal gun are thus different but are both still things. And note that the inorganic robots may one day have greater capacities and capabilities….

Moreover, if one of those capabilities is improved logic skills, they could take our own atheism and its implied amoralism to their logical conclusion and become what we only play at being: the perfect sociopath. It would be tragically poetic if, in an age characterized by the question “Who is to say what’s right or wrong,” we authored our own demise by creating artificially intelligent machines that walk our shallow talk.