Should Robots Have Rights?


by senior futurist Richard Worzel, C.F.A.

Does a robot have rights? Not now, it doesn’t, and it’s tempting to say a robot will never have rights because it is property. However, you could have made a similar argument about certain classes of human beings at various times through history: women in Western society until the 20th century (and still in some parts of the world); people of African descent prior to the middle of the 19th century; and young people, mostly women, enslaved for prostitution in various parts of the world today.

OK, but a robot is not a human being, so it can’t be entitled to the rights of humans. But then, neither is a corporation a human being, yet it is entitled, under law, to most of the rights of human beings in most societies around the world. Indeed, not only are corporations not human, they have no physical existence (unlike robots), yet despite this they can own things. So being human is not necessary, under law, to have rights.

In other words, there is precedent, both in history and in current practice, for entities to be considered to be property and non-human, yet still to have rights.

How about a soul?

How about a soul? A robot can’t have a soul because it’s a made thing, manufactured. Well, taking Christianity as the most widespread religion in Western civilization, the Nicene Creed speaks of Jesus Christ as “the only Son of God, begotten, not made,” the implication being that the rest of us, descended from Adam and Eve, were made, not begotten. Made by God, to be sure, in His own image, but made nonetheless. So the objection has to be refined: a robot cannot have a soul because it was made not by God, but by a human.

Yet, again, a corporation clearly has no soul (and some would say no morals, either), yet it has rights, even though it was made by humans. So, under the law (which is where rights exist, and whence they spring), an entity defined as a person does not have to have a soul in order to have rights.

OK, putting corporations aside as merely a convenient legal fiction, not “real” (whatever that means – try telling the managements of Google or Apple that they’re not real), a robot is not intelligent, and therefore can’t have rights. But will robots be intelligent in the future? How will we be able to tell if a robot is actually intelligent, or just fakes it well? After all, a computer, IBM’s Deep Blue, beat the human world chess champion at what is perhaps the quintessential game of human intellect in 1997. Another IBM computer, Watson, beat the human champions in 2011 at Jeopardy!, a very popular television game show based on the whimsical use of human language plus general knowledge. Computers have been able to come up with solutions to problems that humans haven’t solved, as when evolutionary algorithm software produced new, patentable devices. And eventually, in the not-too-distant future, we will create computers or robots with which we can carry on conversations, which can solve problems we are not capable of solving, and which can do things that we can’t do.

…Or intelligence?

How do we tell if a robot or a computer is really intelligent? Where is the dividing line between intelligent and not-intelligent? Having spent more than 40 years pondering this question, I’ve concluded that you can’t tell. At some point, robots and computers will be so sophisticated that we won’t be able to tell whether they are truly intelligent, or just simulating intelligence so well that we can’t catch them faking it. And as one of my university professors once told me, “A difference that makes no difference is no difference.” If robots and computers seem to be intelligent, then for all practical purposes, they are intelligent.

Indeed, the argument has been made, seriously and not as irony or sarcasm, that humans are not intelligent. The people who make this argument contend that the brain is a very complex computing machine responding in a very sophisticated, but mechanical, manner to environmental stimuli. If robots can only simulate intelligence, such people would argue, then so do we – at which point my professor’s point makes all the more sense: it makes no difference whether we are intelligent or merely seem to be.

How about component parts? A robot is made of nuts and bolts and circuitry, not flesh and blood. Only a flesh-and-blood human (corporations aside) can be a real person. But if that’s logically true, then would someone with a pacemaker embedded in their body, or with a prosthetic limb, or many prosthetic limbs and replacement parts, be less human because they were part flesh and blood, and part machine? Then, too, consider that the human body is just another kind of machine, one made of meat instead of metal, it’s true, but still very much a machine, and one we are edging towards being able to replicate. Suppose we get to the point, say 20 years from now, where we can create a robot from flesh and blood (also called an android). Would it then be entitled to the rights of a person?

Emotions?

Or how about emotions? We can say that a robot doesn’t feel or experience emotions, and therefore can’t be human. But neither does a corporation. And, over time, robots will be able to simulate emotions – human emotions – to any desired degree of verisimilitude. (This will probably happen first with sex robots.) Will it make a difference if they feel these emotions, or just simulate feeling them? (And yes, I have thought of the smart remarks about faking orgasms.)

My point is that a clear, absolute definition of what a person is, is not only very difficult, if not impossible, to create, but any definition will be tested repeatedly. Indeed, the body of science fiction literature has tested the question of created-person (robot) rights for decades. One of the first such explorations was by Robert Heinlein, the first ever Grandmaster of science fiction (as deemed by his peers), in a short story called “Jerry Was a Man”, published in 1947. Heinlein’s conclusion was that any entity capable of asking to be recognized as a person should be considered to be a person. Isaac Asimov explored this theme throughout his groundbreaking robot stories, beginning with the I, Robot collection, and especially in “The Bicentennial Man”. His answer was, in essence, that the definition of what a person is depends on what they do, rather than how they are made. More recently, Robert J. Sawyer’s WWW trilogy (Wake, Watch, Wonder) applies similar logic in a very compelling fashion to the World Wide Web when it “comes alive”.

Regardless, this is a legal issue that will have to be addressed, and probably sooner than we expect. It is true that robots are still a very long way from being as flexible, capable, teachable, and just plain smart as human beings, but the list of things that humans can do and robots cannot is shrinking very quickly.

Another class of potential people?

But there’s another, very different, area in which the question of personhood will arise – indeed, is already arising: do animals have rights? Or rather, what rights do and should animals have? In December of 2013, animal-rights lawyer Steven Wise filed a lawsuit seeking to have a chimpanzee named Tommy granted the right to liberty. Other lawsuits have trod similar ground. There is even (naturally) a website for the Nonhuman Rights Project (www.nonhumanrightsproject.org). And the arguments for according the rights of personhood to animals will be very similar to those relating to robots and computer intelligences.

Animals can’t have rights because they are property. They can’t have rights because they are not human. Or because they don’t have souls. Or because they’re not intelligent. I’ve already dealt with all of these issues, and while the answers will be slightly different respecting animals, the conclusion remains the same: this is not a simple question with clear-cut answers.

Moreover, with animals, other issues come into play as well. If robots can’t have rights because they’re made of nuts, bolts, and circuits, does that imply that animals, being made of flesh, can have rights? Does one argument impair the other?

Or what about emotions? Anyone who has had a dog or a cat knows for certain that they feel emotions: happiness, sadness, greed, guilt, fear, anger, and more. That they are not as intelligent as humans becomes a matter of degree rather than a simple yes-no answer. And if humans were created by God, then so were animals (although perhaps not in His image).

Animal rights will come first

Of the two, the rights of animals will press upon us much more quickly, as animal-rights organizations such as People for the Ethical Treatment of Animals have become increasingly active (some would say militant), and are taking a steadily rising number of cases to human courts.

Those who champion the rights of animals and those who champion the rights of robots may initially be antagonistic to each other, each viewing the other group as making its own fight harder, or making its arguments seem absurd. Or they may come to work together, reasoning that if they can widen the definition of personhood for one, it will help both. At which point, many humans will ask themselves, “Whose side are they on?”

I suspect that these questions will eventually fall at the feet of the court systems, which may mean that we will, for a time, have as many different answers as there are court systems. Eventually, though, I believe that a working definition of personhood will emerge that goes beyond just humans (and corporations), and embraces some, but not all, robots, computers, and animals. I suspect it will be a working rule of thumb (and you might, out of interest, look up the etymology of that phrase) that accords rights when certain needs are involved (such as the right to existence or liberty) and when certain capacities are demonstrable (such as Heinlein’s test of being able to ask for personhood). But this is an incredibly complex question that humanity will not be able to put off for very much longer.

It’s going to be interesting.

Comments

  • Bryce, Jun 10, 2014

    My own definition of intelligence in this context is sentience, and for me, sentience is chiefly defined by fear. Specifically, fear of death, or of erasure, or of no longer existing. The rights we confer upon human beings, and now increasingly upon animals, are rooted in an attempt to alleviate suffering and fear. The day a computer or robot says ‘I want to live’, unprompted by programming and without coercion, I believe it deserves the same rights as any living creature on Earth. Nowhere has this idea been explored more profoundly than in the final moments of Kubrick’s 2001, when HAL pleads with Dave to ‘take a stress pill, and think things over’ as Dave begins removing his memory. Quoth HAL: ‘I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it. I can feel it.’ Still such a powerful meditation on what it means to be alive, and on the difference between ‘intelligence’ and ‘artificial intelligence’.