Sept. 24, 2001 -- If you create a machine that is capable of independent reasoning, have you created life? Do you have a responsibility to that life, or have you merely assembled another piece of clever hardware that will be rendered obsolete by the next new thing?
In the Steven Spielberg-Stanley Kubrick film AI (as in artificial intelligence), a robot manufacturer creates David, a synthetic boy who is programmed to love. His human owner starts a program that irreversibly fixes the cyberkid's affections on her.
But by designing and building David, the robot maker has created another Frankenstein's monster. The apparently self-aware "mecha" (short for "mechanical") aches for love from his human "mother" and yearns like Pinocchio to be made a "real" boy.
The film raises both intriguing and troubling philosophical questions about what it means to be human, to have a sense of self, and to be a unique, independent being worthy of respect and rights under the law.
When David, acting to save himself from the taunts and threats of flesh-and-blood boys, accidentally injures his owners' son, he is abandoned in the woods and left to fend for himself. He finds himself in the company of freakish, broken, half-formed robots that stay "alive" by scavenging spare parts from a dump.
But just because David cries and pleads to stay with the woman he calls Mommy, and flees when he is tracked down by bounty hunters, are his instincts of terror and self-preservation genuine, or are they merely a brilliant mechanical and electronic simulation of how a real boy would respond? Does it matter?
I Think, Therefore I Am?
Nick Bostrom, PhD, a lecturer in philosophy at Yale University in New Haven, Conn., says it does matter.
"I think that as soon as an entity becomes sentient -- capable of experiencing pain or pleasure -- it gets some sort of moral status, just by virtue of being able to suffer," Bostrom tells WebMD. "Even though animals don't have human rights -- and most of us think it's acceptable to use them for medical research -- there are still limits. We don't allow people to torture animals for no reason whatsoever."
Frank Sudia, JD, offers slightly different criteria. A basic, working definition of what it means to "be," he says, might rest on two abilities: making and acting on a choice from among multiple options, and deciding which of thousands of possibilities is the best one to use in an unforeseen situation.
"If the machine has the power of self-production -- if it can seek its own goals or even pick its own goals from some list of goals it reads about in the newspaper [and decides], 'Oh, I want to look like Madonna,' -- I think that this ability to choose, guided however it might be, is indistinguishable from what we consider to be our sense of self," he tells WebMD.
Sudia is a San Francisco-based e-commerce security consultant and self-described ethicist, scientist, and thinker about intelligent systems. He likens the role of the artificial-intelligence systems designer or robot-maker to that of the parent of an adolescent.
"The teenager starts to have a good variety of responses [but] not a really great restraint system," he says. "You're trying to form their character in such a way that they will make reasonable choices that will be socially beneficial for them. So you play God to an enormous extent with your children. Forget about forming them into Mozart -- you try form them into something that can survive by getting them to have a self."
I Make Choices, Therefore I Am?
The ability to make choices does not by itself imply autonomy, Bostrom points out. The computer Deep Blue defeated chess grandmaster Garry Kasparov. It could choose from among millions of possible chess moves in a given situation, but just try sending it across the street to buy a quart of milk.
"In order to grant autonomy to a human, we require quite a lot of them," Bostrom says. "Children don't have the full range of autonomy, although they can do more than choose chess moves or make simple choices like that. It requires a conception of their well-being and a life plan and that kind of thing. I don't think any machine that exists on earth today would have either sentience or autonomy."
For us to say that a machine is self-aware and therefore is a conscious being, we must first know what it is to be aware. At least one human mind contends that when it comes to the nature of awareness, we don't have a clue.
Margaret Boden, PhD, professor of philosophy and psychology at the University of Sussex, England, tells WebMD that it may well be possible to create a robot that appears to be a self-aware, autonomous being.
"In principle there could be a computer simulation of such a creature, because everything the human mind does depends on the human brain," she says. "But if you're asking me whether that robot would be conscious, I would say that we don't even know what it is to say that we are conscious."
Even if we suppose, as Spielberg and Kubrick do, that it's possible to create a robot capable of acting in its own interests and of feeling pain, loss, and loneliness, will we treat it as one of us, or as just another smart toaster?
I Buy Groceries, Therefore I Am?
If we can be emotionally manipulated by a movie -- another form of simulated life -- or if we enjoy the Las Vegas version of Paris, then we could certainly be affected by the crying of a robot baby or the pleadings of an artificial boy like David in AI. And it's that interface -- the box that contains the hardware (a robotic brain) and the way in which the software interacts with the user -- that may make all the difference.
"If an AI got to look like a dog, maybe it would have the rights of a dog. ... If it got to look like Einstein, maybe it would have the rights of an Einstein," Sudia says.
It's certainly possible to design an intelligent system that could, say, do the grocery shopping and pay at the register for us. To do this, it doesn't have to look like a human, says Ian Horswill, PhD, assistant professor of computer science at Northwestern University in Evanston, Ill.
"You can have systems which to all intents and purposes are intelligent -- at least a lot more intelligent than pencils or word processors -- but don't have the ... characteristics of human existence," Horswill tells WebMD.
There's no reason, for example, that a shopping robot needs to look like your Uncle Chuck. It could be a rolling cash register -- a simple box with a screen, grabber arms for taking boxes of corn flakes off the shelf, and a drawer for holding the change. But it would still be an "it" and not a "him" or a "her," Horswill contends.
"You could build a machine with a Commander Data-like body and give it emotions, and then remove its brain and put it in a trash-can robot with a cash drawer and only allow it to communicate in Morse code," he says, " My guess is that most people would be much more willing to switch off the trash-can robot then they would Commander Data.