This semester I’m taking a class called Human-Robot Interaction, for which I will be writing class blog posts, mostly responses to readings, that sort of thing. I figure as long as I’m going to be writing these things, I might as well post them here, too. So here’s my first class reading reflection.
Dautenhahn writes “Once people can no longer distinguish a robot from a person […] then people will treat them like humans,” which seems to me to imply that she expects robots will inevitably be indistinguishable from humans in the future. I am not so sure this is likely. For one thing, the point of creating and using a robot is that it is not a human: it can do something better or more cheaply than a human can, or it can work under conditions that are not practical for humans. Somewhere along the line, these differences will make it possible for people to tell that the robot is a robot, not a human.
It will probably be technically possible one day to make androids that are, on casual inspection, indistinguishable from humans, but I doubt that will be common or desirable. People will not want to be “tricked” into thinking they are interacting with a human when they’re not, and I think for many people, the experience of interacting with an entity that is convincingly human-like but that they know is not actually human will be very disturbing. Maybe this is just a failure of imagination on my part, and after some social readjustment, people will behave very differently toward convincing androids than I expect. But when I think about the way humans treat other humans who are unlike them in some way, I am skeptical. Intergroup relations are complicated, and often not very pretty, and many people experience a lot of fear and anger when group boundaries they consider important are blurred in some way.
A lot of how this might play out depends on whether or not the androids in question are truly artificial intelligences on a human scale, and what social and ethical implications a “self-aware” machine would have, which is a very complicated issue I certainly can’t predict accurately. But when I see the kinds of trivial differences between groups that can lead so many humans to treat other human beings as less-than, well, I don’t have a lot of hope for the androids. What I suspect is that, in general, people just will not want robots that they might mistake for humans. Of course I can come up with scenarios where someone does want to deliberately trick people into thinking a robot is a human; there are plenty of sci-fi stories about those kinds of situations. But I don’t really know enough to have an opinion on how much of a problem that sort of thing would really be.
I think this blog post turned out more depressing and less coherent than I’d intended. It’s hard for me to be clear and focused, though, when there are so many difficult and complicated issues connected to the idea of convincingly human-like robots.