I wanted to cite a lot of SF novels while writing this


(This is a response to Chapter 2 of The Design of Future Things by Donald Norman.)

Don Norman clearly believes, as do many others including myself, that the future of robots is closely tied to the future of artificial intelligence. I double-majored in computer science and psychology as an undergrad, which is the sort of thing that most naturally leads either to Human-Computer Interaction or to Artificial Intelligence. I'm also sort of a geek and have been reading and watching science fiction for most of my life. So I've thought about artificial intelligence a lot.

What I really wonder about with AI is just how much like humans it will be. Currently, the reason we use computers is that they are very good at doing some things humans are not so good at. And the only model we have for human-style intelligence is, well, humans. If we build a computer that is good at human-style things, will it end up having human weaknesses, too? So many of the weaknesses that computers are intended to correct are side effects of the very mechanisms that allow us, as humans, to do the things we are good at. Will a computer that has intrinsic motivations like ours, and the ability to reason from incomplete information as we do, end up being just as biased in its reasoning as we are? Will it end up argumentative and uncooperative, demanding payment and benefits and certain standards of treatment, instead of being a tireless and uncomplaining worker? And if we deliberately try to design AIs so that these things don't happen, will that end up handicapping their ability to perform the human-like tasks we want?

I've never met a sentient being that isn't a human, so it's hard for me to imagine what sort of intelligence could do the things humans do without presenting many of the same kinds of difficulties that working with real humans does. Some people confidently assume that we are capable of creating AIs that will be much smarter and less fallible than we are, and that such a future is inevitable. I don't know. I believe that some kind of human-like AI is possible, and maybe even inevitable, given enough time to work on the problem. But I don't really know how those intelligences will differ from us. The answer matters a whole lot to what future human-robot interactions will be like, though.