What is the role of pain in our lives? Pain, we can all agree, is unpleasant, both physically and emotionally. Pain acts as an alarm when we face danger. Pain can be excruciating, tragic, the forerunner of death. In short, when we feel pain, we feel more alive than ever. Now that robots play an increasing role in our society, should we design them as sentient machines with the ability to feel pain?
Robots are everywhere: in manufacturing, in agriculture, in transport and distribution, in communications, in the home. And they appear not just as the androids the famous science fiction author Isaac Asimov visualized 75 years ago, but in a vast range of devices, from autonomous vacuum cleaners to whole factory production lines and military drones.
Arguably, it might make sense to endow some of them with the capacity to feel pain in situations where doing so could help the machine foresee a threat and save itself from damage. But should a machine be endowed with merely a series of physical reactions demonstrating pain, or should it feel pain as an emotion, the way we humans feel it?
When a machine feels pain, will it cry?
Or an equally valid question: should it cry?
The question of whether robots should feel pain may sound futile, but it’s not. With advances in computing power, particularly with quantum computing just around the corner, we are close to being able to create robots with General Artificial Intelligence. Not just a specific ability, like beating human champions at difficult games such as chess and Go, but a “general” intelligence that could soon lead to the dreaded Singularity, the point where Artificial Intelligence surpasses human intelligence.
In short, we are headed towards a world where science fiction meets reality, where our planet hosts two types of “sentient machines”: us and the robots.
How to Organize a World Full of Sentient Machines
Scientists have been working on this for several years, notably Beth Singler and Ewan St John Smith, both at Cambridge University.
Beth Singler is the Homerton Junior Research Fellow in Artificial Intelligence, exploring the social, philosophical, ethical, and religious implications of advances in AI and robotics. She worked on the “Human Identity in an age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion.
Ewan St John Smith conducts research in the Department of Pharmacology and, among the various research topics he has pursued, has looked at the role of pain in robot development.
Together with filmmakers Colin Ramsay and James Uren of Little Dragon Films, they have produced a documentary, “Pain in the Machine”, part of a series of short films called the “Cambridge Shorts”. Supported by the Wellcome Trust ISSF, the shorts are designed to “give researchers the opportunity to work with filmmakers and artists to make films about their work that are creative, accessible and engaging”. The idea is to open up to the general public and engage with all of us non-scientists. Take a look:
Watching the autonomous vacuum cleaner being kicked downstairs certainly raises the question of whether it makes sense to design pain-sensing devices. Yet this is only a vacuum cleaner; surely a simpler method could be found to protect the machine from damage.
Once you’ve watched the “Pain in the Machine” film, you are asked to “take a moment” to complete their short survey: https://www.surveymonkey.co.uk/r/Pain…
I did; the film certainly got me thinking. In my view, most robots probably don’t need to feel pain, except in the most basic way: building into them the capacity to avoid threats that could damage them, like the vacuum cleaner in the film.
But if the goal of a robot is to help take care of the elderly or children, i.e. to engage in close interaction with humans on a daily basis, then it’s different. Endowing it with emotions and a capacity for empathy with humans could make sense. And it would probably be better to do this on both levels, physical and emotional.
Whether that leads to a new form of AI endowed with consciousness – the way we humans are conscious, of ourselves, of our environment – is another question and possibly not even a relevant one.
The objective is to make the robots as functional and effective as possible in relation to the goals (tasks) we have assigned to them. This means mimicking all our emotional reactions (in addition to physical ones) so that robots that are caregivers can interact with people in a credible, useful way. So they can empathize.
But the empathy only needs to be perfectly replicated (mimicked); it doesn’t mean the robot actually feels pain. It’s a pretend game, but the part the robot plays has to be played perfectly if humans are going to “fall for it” and if the care-giving is to be wholly successful. Anything less than a full emotional display on the robot’s part will arouse doubt and suspicion in the interacting humans.
All of this bypasses the ethical questions. Feeling pain is one thing. Feeling hate or a desire for revenge – say, against the person who inflicted the pain – is quite another. Yet it’s often a natural human reaction; revenge is a recurrent human emotion. Would a pain-feeling robot also run the whole gamut of interconnected emotions? Designing machines that can replicate pain also means designing machines that could engage in morally “deviant”, vengeful behavior.
Vengefulness is not something you’d want to see in a machine.
So we need to design robots that act in a wholly ethical manner, like selfless heroes. Here we enter the realm of Asimov’s Three Laws of Robotics and everything that is not quite right with those laws. Asimov himself, in his books, starting with the famous I, Robot collection, dreamed up situations where the laws were misinterpreted, incorrectly applied or didn’t work as intended.
Asimov’s Three Laws of Robotics: Why They Need Updating
Asimov’s laws were intended to make our interactions with robots safe:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
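Read as a decision procedure, the three laws form a strict priority hierarchy: the First Law overrides the Second, which overrides the Third. Purely as an illustration, here is a minimal Python sketch of that ordering; every name and predicate in it is a hypothetical stand-in, not an actual robotics API, and the neat boolean checks gloss over precisely the judgments (would this harm a human?) that make the laws so hard to apply in practice.

```python
# A minimal, purely illustrative sketch of Asimov's three laws as a strict
# priority ordering. The Action fields are hypothetical stand-ins; a real
# robot has no clean boolean answer to "does this harm a human?".
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    neglects_human: bool     # would it, through inaction, allow a human to come to harm?
    ordered_by_human: bool   # was it ordered by a human?
    endangers_self: bool     # would it damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, directly or through inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect the robot's own existence, unless that conflicts
    # with the First or Second Laws (already handled above).
    return not action.endangers_self

# Example: an order that would harm a human is refused - the First Law wins.
print(permitted(Action(harms_human=True, neglects_human=False,
                       ordered_by_human=True, endangers_self=False)))  # False
```

The tidy ordering is exactly what breaks down in the situations Asimov imagined, and in the military scenarios discussed next.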
With the multiplication of non-android robots and complex devices Asimov never expected, many AI experts argue that his laws need updating. As Mark Robert Anderson, Professor in Computing and Information Systems at Edge Hill University, wrote in a recent article referring to military drones and other future military devices: “it is only a small step to assume that the ultimate military goal would be to create armed robots that could be deployed on the battlefield. In this situation, the First Law – not harming humans – becomes hugely problematic.”
Certainly, new rules are needed for robots on the battlefield. A recent Impakter article raised the question: Should we ban killer robots? Also, a related issue: when robots do wrong, who should take the blame?
But there are other problems too. For example, with improvements in machine learning it becomes possible to write music that responds to feelings in the audience. An international research team led by Masayuki Numao, professor at Osaka University, working together with Tokyo Metropolitan University, imec in Belgium and Crimson Technology, has released a new machine-learning device that detects the emotional state of its listeners to produce new songs that elicit new feelings.
As Professor Numao explained, “We preprogrammed the robot with songs, but added the brain waves of the listener to make new music.” Numao envisions a number of societal benefits to a human-machine interface that considers emotions. “We can use it in health care to motivate people to exercise or cheer them up,” he said.
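As a rough illustration of the kind of closed loop Numao describes – read the listener’s state, then pick or shape the music to nudge it – here is a minimal sketch. The function names, the valence score and the song data are all hypothetical stand-ins, not the Osaka team’s actual system or hardware interface.

```python
# A hypothetical sketch of an emotion-driven music loop, loosely inspired by
# the system described above. read_brainwaves() and estimate_emotion() stand
# in for whatever EEG hardware and models the real project uses.
import random

def read_brainwaves():
    # Placeholder: pretend to sample an EEG signal and return raw features.
    return [random.random() for _ in range(8)]

def estimate_emotion(features):
    # Placeholder: map raw features to a valence score in [-1, 1]
    # (-1 = distressed, +1 = cheerful).
    return sum(features) / len(features) * 2 - 1

def pick_next_song(valence, songbook):
    # Choose a preprogrammed song whose mood nudges the listener toward a
    # slightly more positive state (the "cheer them up" use case).
    target = min(1.0, valence + 0.3)
    return min(songbook, key=lambda song: abs(song["mood"] - target))

songbook = [{"title": "Calm piece", "mood": 0.0},
            {"title": "Upbeat piece", "mood": 0.8}]

valence = estimate_emotion(read_brainwaves())
print("Playing:", pick_next_song(valence, songbook)["title"])
```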
The problem here is that humans are far more complex than that, especially over time: people may not want to be motivated to exercise or cheered up every time they hear music. They may well feel conflicted and hate to hear yet another run of jolly tunes.
What Kate Devlin – a computer scientist, currently Senior Lecturer in the Department of Digital Humanities at King’s College London and author of ‘Turned On: Science, Sex and Robots’ (published in 2018) – has to say about all this throws an unexpected and interesting light on the subject (disclosure: I first came across the question of “pain in the machine” in her book).
Her book, the title notwithstanding, is a very serious essay on the future of sentient machines and how they might fit into our lives; it is only marginally about sex robots. She sees sex robots as a “very niche” market that is not likely to grow very much. Instead, her point is that, as she says in the video below, “I think we should be making really interesting forms of sexuality and intimacy with the technology we have.”
Regarding “pain in the machine”, Devlin comes to it from the “sex and intimacy” angle. To be clear: this has nothing to do with the Marquis de Sade’s experiments. It has everything to do with human consciousness and the whole range of emotions from pain to love (and sex). As she writes: “Pain gives us [humans] a better chance of survival, so it’s useful to us as a form of feedback about our environment and the dangers that surround us.”
The Role of Pain and Emotions in Machines
Both Devlin and Singler agree that pain could have a positive role for a robot, so that it can avoid damage. But, Devlin adds, “pain also has an emotional component. Do we want machines to feel emotions? Should we develop them to feel pain and other emotions so that they can feel empathy?” However, as Singler points out, machines could manifest pain without feeling it: “We may infer pain on their behalf,” she says. “We anthropomorphize the pain onto them”. Indeed. That’s what the girl at the end of her film does when she repairs her broken vacuum cleaner.
Devlin (ever concerned with sex robots) concludes: “If we can project pain onto machines, what about other powerful feelings? What about desire?”
Here is the nexus: we move from pain to its opposite, pleasure. The next step is a short one: desire, and sex robots.
Devlin, of course, is not the only computer scientist interested in the question. Perhaps the pathbreaker in this research area is David Levy, a first-generation AI expert and chess master, and author of Love and Sex with Robots. Published in 2007, it is the commercial version of the Ph.D. thesis he successfully defended at Maastricht University (Netherlands).
In his book, Levy made the shocking prediction that by 2050 it would become acceptable (if not commonplace) for people to marry robots. Now in his seventies, Levy still defends his vision of a world filled with robot love.
This is something Devlin objects to. She doesn’t think it will happen, not only because “you can’t marry a robot” but because “marriage itself is thankfully no longer socially crucial and for many people no longer a goal”. As she puts it: “[Levy’s] views are hopeful and his aims are laudable, and the final sentence in his book reflects his wish: ‘great sex on tap for everyone, 24/7’. Like him, I see the potential for happiness. I’m just a little more cautious.”
The upshot? Even if we accept the idea that more human-robot empathy is needed, we can’t agree on what kind of relationship we want with robots. Humans are diverse, fickle and, indeed, often unpredictable. True, machines could in theory be coded to adapt to our whims, and they never tire, but could they keep up with our moodiness?
Not everyone likes sex robots. In 2015, a Campaign Against Sex Robots was launched in the UK in response to the burgeoning sex-robot industry. On its website, it claims that sex robots “are potentially harmful and will contribute to inequalities in society” (the evidence for that claim is not yet in). The market for sex robots is undeniably tepid: in a 2018 YouGov survey of 1,714 adults in the UK, only 13 percent answered yes to the question ‘Would you consider having sex with a robot?’. Some 72 percent said they would “definitely not” or “probably not” (5 percent were undecided).
Voilà! Robot. pic.twitter.com/GPSeRg3fpW
— Kate Devlin (@drkatedevlin) May 13, 2019
Why this enthusiasm for a sinuous, semi-abstract, non-gendered robot? She explains it in her book: “A step into abstraction is a step away from sexual objectification and entrenched gender roles.” Both, of course, are precisely what is wrong with “the hyper-realistic, hyper-sexualized gynoid” sex robots commonly found on the market today.
We’re still a long, long way from loving robots. As Devlin says, “there is a long path ahead of us when it comes to our intimacy with robots and AI.” But the academic world has made a good start debating not just sex robots but the broader issue of consciousness in AI, exploring the ethical and social aspects of machine-human interaction and whether an emotional dimension is needed for machines. So, by the time General Artificial Intelligence becomes a reality, we might (hopefully) be ready for it.
Update 17 May 2019: The famous Eurovision contest has a very special singer in the running this year: a pink robot with a song written by Oracle AI in Israel, building on all the winning songs of the Eurovision contest:
Kate Devlin would approve: this singing robot is not at all a hyper-sexualized gynoid. And the music? Judge for yourself…