What if a robot could feel pain – and not just an automatic response to a potentially harmful stimulus, but an emotional experience of pain? This is one of the questions I asked Dr Beth Singler, a Research Associate at The Faraday Institute, in today’s podcast. The human pain response is extremely useful, but can we, should we, or do we need to give it to robots? (Transcript below)

Can you tell us first what brought you to the Faraday Institute?

I started on this project back in February this year. Previously I’d been finishing up my PhD, and I actually saw the job advertised when I was at a conference. What really excited me about it – and I will proclaim this: I am a huge geek – is that I’ve always been into science fiction, fantasy and other things, so I already have quite a broad knowledge of representations of artificial intelligence and robots in popular culture. And then I saw that the project considers what it means for human identity as some of this technology advances, and also takes in ideas about ethics and philosophy. As a social anthropologist I am also very interested in people’s responses to technology. That was certainly a strong thread in my PhD thesis. Continuing that sort of research area was very interesting to me, so I came to the Faraday in February.

So your project is on what it means to be human in an age of nearly human machines?

Yes.

How did you get to a point where you were equipped to study that – apart from having this huge knowledge of AI?

Well, my PhD thesis and all my research so far have been looking at people’s religious innovations: new religious movements, new areas of spirituality that people move into. I’ve mostly been looking at the ones that are picked up and played with by people who have quite an interest in science and technology, although their interpretation of it might be called pseudoscience rather than hard science. The groups I was looking at for my PhD thesis are primarily online groups, and that’s where people’s responses to science and technology, and specifically AI, tend to be. So I am still spending a lot of time on social media, seeing conversations as they’re happening and trying to engage in some of those conversations as well.

So where are you at with the project now?

We’ve been doing lots and lots of different things. Some of it is fieldwork, getting an understanding of what people are saying and thinking; some of it is literature reviews, reading some of the great books that are out at the moment on the subject of Artificial Intelligence and where it might be going. We’ve also been trying to produce things, so we’ve had events like a short course we held here at the Faraday in September. A theological workshop was part of that, so we’re starting to ask theological questions as well, and there may be publications eventually on that. We also made a short film about the big question of whether robots could and should feel pain, which is doing really well. We made it in association with the University of Cambridge Shorts scheme and the Wellcome Trust, who funded it. It has been up online since the end of October and we are already up to nearly 12,000 views, which I am really pleased about.

What is the main storyline of that film?

We tried to make it quite accessible, so it does have a bit of a narrative to it. It starts with a woman who gets slightly frustrated at her Roomba (a currently available piece of technology – a robot vacuum cleaner) and suddenly kicks it down the stairs. That is the opening into some of these questions: how do the mechanics of pain work in a human being? Is that analogous to how they could potentially work in machines? It’s quite hypothetical, but once we get to that ‘could’, then we start debating the ‘should’, and we’ve got some really interesting experts from various different fields – philosophers, cognitive scientists, ethicists – discussing, hypothetically, whether we would want robots who could feel pain. There’s benefit in that in a technical sense – responding to and retracting from pain is a good idea – but there are also the emotional elements of pain: is there a right to freedom from pain for machines as well? And then, getting into the more science-fiction areas, we do have clips from various sci-fi films and TV shows in it as well.
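To make the ‘could’ side of the question a little more concrete, here is a minimal, purely illustrative sketch of the functional, non-emotional kind of pain response that the experts in the film distinguish from the emotional experience – a reflex that retracts from a harmful stimulus. The names (`PainSignal`, `ReflexController`) and the threshold value are hypothetical, invented for this example, and are not taken from the film or from any real robot.

```python
# Illustrative sketch only: a functional, non-emotional "pain" response.
# All names and values here are hypothetical.

from dataclasses import dataclass

@dataclass
class PainSignal:
    location: str       # which sensor fired
    intensity: float    # 0.0 (no harm) to 1.0 (damage imminent)

class ReflexController:
    """Retract from harmful stimuli – the 'technical' benefit of pain."""

    RETRACT_THRESHOLD = 0.6  # arbitrary cut-off for this sketch

    def on_stimulus(self, signal: PainSignal) -> str:
        # A reflex arc: no emotional state, just damage avoidance.
        if signal.intensity >= self.RETRACT_THRESHOLD:
            return f"retract {signal.location}"
        return "continue"

controller = ReflexController()
print(controller.on_stimulus(PainSignal("left gripper", 0.8)))  # retract left gripper
print(controller.on_stimulus(PainSignal("left gripper", 0.2)))  # continue
```

The point of the sketch is only to show how little is needed for the technical benefit of damage avoidance, and how much is still missing from anything like the emotional experience the film asks about.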

It’s a really fun film! As far as I can understand, part of the message was that pain is a useful thing. It’s a very real part of who we are as human beings, but the question is whether we can or whether we should give that experience to robots. What is your own view there?

Human vs robot, by Franz Steiner [CC BY-SA 4.0], via Wikimedia Commons

As a social anthropologist I’m quite interested in the human response to artificial intelligence and robots, and that is certainly a big part of our project. What I think is likely to happen is that we may not be able to prove either way whether robots or machines feel something identical to what humans feel, but, as they say in the film, and I think is true, people will continue to infer pain – especially if a machine comes up to you and says ‘I feel pain’. That’s a strong prompt for the inference that it feels actual pain in the same way you do. Even when they don’t: in some of the examples we used in the short film, the Boston Dynamics robots, there is nothing in their system that could even remotely be called pain – there is no data in that sense – but because people saw footage of the prototypes being pushed and shoved, they (online primarily) tended to respond as though the robots were suffering and in pain. I think that is going to be a key aspect of how our relationship with even non-sentient AI and robots continues to develop, because humans will keep inferring different things and reacting in particular ways because of those inferences.

Yes, because we can confer character on a rock, let alone a robot.

Absolutely! My friend had a pet rock years and years ago…

Is part of what you’re asking, what does that do to us when we start interacting with these things and projecting these feelings of pain on them?

Yes, absolutely. We’re interested in how our concept of our place in a wider cosmology may or may not change. We’re increasingly seeing scientific research on where animals fit into our understanding of levels of intelligence and consciousness (a very difficult term we can’t really define or prove). Bringing robots or artificial intelligence into that wider cosmology as another potentially sentient being is going to change our understanding of our place on the earth and in the cosmos in a really interesting way. I’m not so worried about anthropomorphism. I’m probably more concerned about the objectification of machines beyond the point where we actually reach a level of co-intelligence. So what I am saying is: hypothetically, if we had human-equivalent robots – intelligence-wise – and we continued to treat them like objects, what would that mean about humanity? Likewise with animal rights: we’re increasingly proving, or seeming to prove, levels of intelligence, and we still continue to objectify and often mistreat animals. Historically speaking, humanity, or Western civilisation, has not always had the best track record of dealing with other cultures that it presumed, at the time at least, to be of lesser intelligence. So perhaps we need to learn some of those lessons from history as we go forward.

Interesting, interesting. So would you say then that we should avoid giving robots pain so that we don’t have to worry about those things? Or do you think there would be an advantage in giving them those experiences?

I think whether we give them pain or not, we’re still going to have to have some of these conversations. The advantages of a concept like pain – the obvious one being retracting from bad situations – may come with the emotional side that some of the experts in our short film believe is so intrinsic to it; we don’t know. So we’ve got to be thinking ahead of what we’re doing. There’s a lot of talk about what could happen, and we need to be thinking and discussing it now.

Is it more that this is something people are already thinking about trying to do, and we just need to be ready?

We have actual examples of ways in which technology is heading towards things like synthetic skin and nervous responses to stimuli.

Interesting, so they’ll be with us! And you’re working with two people from other departments, one of them from outside this University.

Yes. Professor John Wyatt is a biomedical ethicist with a background in perinatology, and Professor Peter Robinson is a computer scientist; both of them have a faith background and a long-established relationship with the Faraday Institute. So it is a good team to bring together. We’ve got very different areas of expertise.

It will be very interesting to hear what comes out of the project. Now there is a survey that you are doing as part of this.

Yes, it’s linked in the description on the YouTube page for the short film. It was a way of continuing the conversation, or at least allowing the conversation to continue with those people who do choose to watch the film. Not everyone is going to respond to the survey, and that’s fine, but we also wanted to get an idea of who was watching the film, so there is some basic demographic information in there, and we ask whether they felt they’d been educated by the film – it would be useful for us to know if we’ve actually got them thinking, told them some things they didn’t necessarily know before. Finally, there is lots of space for comments, so we can find out what people are interested in, whether they have a positive or a negative reaction to the film, and whether there is something else we could have considered but didn’t. We’re getting responses now, but we’d like more. We have had about 12,000 views of the short film, which shows that there is definitely a lot of interest in this area.

It sounds vital. Thank you, Beth, for your insights today. It will be very interesting to see where this work goes and how it helps us think about how we interact with robots in the future.

Please watch the film and fill in the survey: Pain in the Machine

Beth Singler is a Research Associate on the Human Identity in an Age of Nearly-Human Machines project. She is working with Professor John Wyatt and Professor Peter Robinson to explore the social and religious implications of technological advances in AI and robotics. Beth’s research explores popular and religious re-imaginings of science and technology. She is an experienced social and digital anthropologist of New Religious Movements, and her recently completed PhD thesis is the first in-depth ethnography of the ‘Indigo Children’ – a New Age re-conception of both children and adults using the language of evolution and spirituality. She has also published on the development and legitimation of Jediism and Scientology through social media.

Beth has completed three Theology and Religious Studies degrees at the Faculty of Divinity, Cambridge: undergraduate, masters and PhD.  She has taught at the Faculty on “Understanding Contemporary Religion” and “Topics in the Study of Religion”, as well as presenting her work to the public through the Cambridge Festival of Ideas as a “Cambridge University Rising Star”.  She has also been interviewed for the BBC’s Inside Out documentary series, and by the BBC News, LBC radio, and Radio 5 Live.

Twitter: @BVLSingler