

How will developments in AI and robotics change the way we think about what it means to be human? This was the question that Professor John Wyatt, a medical doctor with a long involvement in this discussion, asked in his lecture at the Faraday Institute summer course this month. I’ll summarise his answer here.

We can use machines to understand ourselves as human beings, describing ourselves as ‘hard-wired’ to like music, ‘programmed’ to obey our parents, or ‘self-replicating information-processing system[s]’. Some people see research on robots as a way to understand human identity.

We can also relate to machines in a human-like way (anthropomorphism), seeing them as entities that we can have a relationship with, or need to look after. A big trend at the moment in robotics and AI is to produce very friendly or childlike robots that we can relate to very easily. For example, Hanson Robotics has produced Sophia, the very human-looking ‘friendly and empathic AI’. There are also less human-like robots which play up the cuteness and relational factor as much as possible. The company that makes the entertainment robot Jibo claims that “He loves to be around people… and the relationships he forms are the single most important thing to him”. There’s also the online ‘Woebot’ app, a simulated CBT counsellor that helps people work through their difficulties.

For John, a machine understanding of people can be useful. We can understand the brain better if we sometimes think of it as a computer, and cognitive psychologists might at times think in terms of software. But that doesn’t mean that we are machines. These are metaphors, not descriptions. We can talk in terms of ‘machine understanding’, but only persons can truly understand.

Manufacturers would like us to see their robots as persons for all sorts of commercial reasons. We might get attached to their products if we relate to them in a more human way. We might use them more, or want to upgrade them, if we have a relationship with them. But our instinctive compassion for human-like objects lays us open to manipulation or deceit by something that is essentially a manufactured product. Is that a healthy relationship?

What might our relationships with robots do, John asked, to human relationships? In the same way that a child can get away with being rude to Alexa, will the proliferation of human-like robots provide yet another way of teaching people to use each other in a selfish way? We have animal protection laws, not just to safeguard animals but because it seems to affect our humanity if we abuse something that shares so many of our capacities. Will we soon need robot-protection laws for the same reason?

For some, there might be a point at which a robot is considered to be a moral agent in its own right, and worthy of respect. But we only know that other humans are conscious by a process of intuition: I think, so I presume you think too. How could we ever know whether a machine was conscious?

From John’s perspective as a Christian, personhood is something that is foundational, and not reducible to matter and energy. In the same way that the persons of the Trinity interact, human persons are free to interact with each other, giving and receiving love. To be a person is also to participate in divine life: ‘you love me therefore I am’ (Amore ergo sum).

The Jewish philosopher Martin Buber argued that there are two types of relationship: ‘I-it’ and ‘I-thou’ (or ‘I-you’). For him a machine will always be an ‘it’, but John pointed out that machines will increasingly seem like a ‘you’. What’s more, if our society increasingly sees human relationships in instrumental ways, treating each other as an ‘it’, will we spot the problems with producing human-like machines that do the same?

For the theologian Karl Barth, who was influenced by Buber, there was something more profound in the ‘I-you’ relationship than in a relationship with an ‘it’. John’s own focus is on the Nicene Creed, which says that the Son was ‘begotten, not made’ by the Father. Children are part of their parents in a very real way, but they also have their own personalities. Machines, on the other hand, are made: a product of our will, and ours to mould and control. Having children seems to work well enough for most parents, but do we trust ourselves with the making of human-like robots?

So while intelligent machines will increasingly play the role of a person in their interactions with us, John feels that we cannot share our nature with them or take part in a genuine relationship with them. We need to distinguish between human and machine. The question now is: how can we regulate the use of robots so that they enhance, rather than threaten, human flourishing?