Robots as caregivers


Tapus, Mataric & Scassellati, “Grand challenges of socially assistive robots”

Sparrow & Sparrow, “In the Hands of Machines”

This is another set of robotics articles that had me mainly thinking about artificial intelligence, with talk of things like giving robots personalities, empathy, and the ability to adapt to complex long-term changes. The point of designing a socially assistive robot for patient care, of course, is what it can do that humans can't. In places like the US, with a proportionally older population, there is a shortage of high-quality long-term caregivers, mainly because such work is difficult and demanding, as Sparrow and Sparrow point out. One caregiver can only adequately care for a small number of patients, and the work is often both emotionally and physically stressful. Robots don't feel emotional or physical stress the way humans do; they can (assuming no accidental malfunction…) function for long periods and on demand rather than needing time to rest and care for themselves; they could stay in a person's home and offer more independence; and so on.

But I find myself wondering, again: if we make a robot that can adequately fulfill the criteria that Tapus, Mataric and Scassellati set out, will it still have all the advantages over humans that we see in robots now? There is an idea in the AI field that many problems are "AI-complete," meaning, roughly, that in order to do one piece of what we think of as intelligent behavior, such as natural language processing, an AI must have all the other general intelligence capabilities that humans have. The belief is that understanding language the way humans do requires thinking the way humans do, on a deep level. I suspect that emotional and social understanding are similar: in order to actually interpret and respond to human emotion successfully, without upsetting or alienating people, a robot needs to think and "feel" in ways similar to humans. And if we create a robot with a simulation of a complex inner emotional life, will it end up having emotional needs similar to ours? Will a robot like that feel something like stress, and need time to relax or have fun in order to be an effective, emotionally responsive caregiver? And that's completely leaving aside the questions of self-awareness and personhood, and whether human-like AI would be given human-like moral status.

Obviously I don't have the answers; I can't know what will happen when, or if, we create artificial intelligence with human-like capabilities. What I am quite certain of is that there will be consequences none of us can foresee, and that I wouldn't count on such robots to solve, once and for all, the problems their creation is meant to solve.

I'm afraid I'm with Sparrow and Sparrow in thinking that the more immediate benefit for caregiving lies in devices that do much simpler tasks: tasks that are relatively easy for robots and to which humans can't add much value. I'd want to augment existing human capabilities without trying to replace them, and to complement human strengths rather than create something that has to duplicate them in order to function well. I'm not sure robotics is the best way to achieve those goals.

Sparrow and Sparrow also bring up numerous ethical implications of robot elder care (which I think generalize to robot caregiving in other contexts as well) that are very interesting and important, but if I tried to write about any of them, I don't think I'd be able to stop myself. I wonder if the robotics field has much awareness of Value-Sensitive Design.