Is your smartphone really smart? Do you ever fear it will get too smart? Will it wake up one morning and decide to start running your life – deleting contacts it doesn’t like, booking holidays online that it wants to go on with you, or shifting your calendar appointments to suit its tastes? Perhaps, more realistically, you may be inclined to feel that your printer has a mind and mood swings of its own, seemingly out to get you when you are facing the most desperate deadline. But in fact, the more we progress in the field of robotics, the more we are forced to recognise and appreciate that the mind is a unique wonder of the living world. Could technological advances ever escalate robot intelligence into the kind of personal mind we enjoy as humans? And what does this say about human uniqueness, and about our distinctive capacity to relate to a creator?
Artificial intelligence (AI) is technology that simulates intelligent behaviour by performing programmed tasks. A chess robot can bring chess masters to tears with its seemingly great intelligence, robots have been assisting surgeons with operations since 1984, and mobile phones now even ‘ask’ how they can help you! These machines seem to match our skills and intelligence in ways previous generations would never have dreamt of. However, if you ask that same surgical robot for a friendly game of chess, or tell your mobile that ‘yes, you would very much like it to perform keyhole surgery on you while you wait for the ambulance’, it will not appear intelligent, or even competent, at the new task. AI is domain-specific, restricted to its particular area of expertise.
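This domain-specificity is easy to see in miniature. The toy sketch below (an invented illustration, not the code of any real system) hard-codes a little ‘chess expertise’; inside that narrow domain the program looks knowledgeable, but the moment a question falls outside it, the illusion of intelligence collapses:

```python
# A toy "chess advisor": competent only inside its narrow, programmed domain.
def chess_advisor(question: str) -> str:
    # Hard-coded domain knowledge: the program only "knows" two chess openings.
    openings = {
        "e4": "A classical king's pawn opening; expect e5 or the Sicilian (c5).",
        "d4": "A queen's pawn opening, often leading to the Queen's Gambit.",
    }
    for move, advice in openings.items():
        if move in question.lower():
            return advice
    # Outside its domain, the system has nothing intelligent to say.
    return "I can only discuss chess openings."

print(chess_advisor("What should I expect after e4?"))
print(chess_advisor("Can you perform keyhole surgery on me?"))
```

The second call exposes the point: the program does not ‘understand’ chess, surgery, or even the question – it only matches patterns it was given in advance.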
That said, with technology advancing at such a pace, could we ever reach a point where a single machine has a more general purpose? Artificial General Intelligence (AGI) represents this next, as yet unrealised, step. If AGI robots could be programmed with all the expertise humankind possesses, as well as an adaptability to learn new things, would they be intelligent in the same sense as us – or perhaps beyond? Would this new human-like general ability lead to the apocalyptic scenarios painted by films like I, Robot and Terminator, in which machines have developed not only immense capabilities but also minds of their own?
Last month I sat in on a talk by Angus Menuge, professor of philosophy at Concordia University Wisconsin, on ‘Artificial Intelligence and the Metaphysics of Mind’. Menuge clearly laid out what he thinks AI can and cannot achieve. He believes that AI could indeed advance into AGI. However, he defines two different levels of AGI, with the concept of mind at the heart of the distinction.

AI possesses operational intelligence: a robot can be programmed to do things humans need intelligence to do, so its actions appear intelligent. Modern technology can be remarkably good at processing patterns, for example, but the machine has no intelligence that understands the content or grasps its significance. With this functional idea of intelligence in mind, conventional AI could indeed develop into ‘weak AGI’ by becoming more general and adaptable in its skills. However, there is a metaphysical gap between this and ‘strong AGI’, in which the machine has a mind and intrinsic intelligence of its own. For this strong AGI to succeed, the robot must be its own ‘subject’: an individual with goals, reason and creativity.
We as humans unify many mental states in one mind, so there is somehow one ‘me-ness’ to me: a subject which enjoys unique and personal intentionality, purpose and rationality – a subject that creates new ideas, integrates experiences to make decisions, sets goals for the future and lives a conscious life. For a robot to have a mind, it would need certain qualities of its own. Menuge believes that four of these are unattainable for AGI.
The first obstacle is subjectivity, because thoughts need an owner. To have your own state of mind, you need a unique and unified subject that persists over time. Artificial intelligence machines are physical aggregates of separate parts, and so contain no unified subject which could acquire a personality of its own.
Secondly, the subject requires intentionality: the ability to think about what symbols mean, to think about others and about future events, and to have ideas of what is desirable. Intentionality is personal, and does not emerge out of the impersonal, physical states of an AGI machine.
Third, subjects have a unique sense of teleology (purpose) which requires that they have their own personal goals. This is not the same as a robot working towards the derived goals of its designer.
Finally, there is rationality, our ability to reason. Rules of rationality can be programmed into a robot, but true rationality would require the machine to reason for itself – to reason differently from how we, or its programmer, reason. Mirroring reason is not the same as reasoning.
If Menuge is right, the fact that a programmer cannot impart mind-qualities to AI or AGI machines should reassure us not only that our printers are not attacking us, but also that even the most human-looking robots will be limited in how close they can come to being just like us.
Consciousness and mind are hard to explain, and according to Menuge, also impossible to re-create in purely physical systems. Why should this be? Why is there something special about us? If, as scientism suggests, we are nothing more than organic machines, then we should be fully replicable in every respect. It does seem that we are somehow more than our biological parts. This fascinating and growing field is providing us with the opportunity to engage in discussions about the uniqueness of human beings. I am thankful to have been introduced to this issue, and think that the Christian viewpoint can bring a lot to the table as we consider human identity in an increasingly robotic age.
If you are interested in this topic and would like to hear more, I recommend following the current Faraday Institute project ‘Human identity in an age of nearly-human machines – the impact of advances in robotics and AI technology on human identity and self-understanding’ via its Facebook page or Twitter account, or by joining the project’s upcoming short course in Cambridge this September.