Blake Lemoine, an engineer working on Artificial Intelligence projects at Google, claimed at the beginning of this week that AI could have human-like feelings. This is a deeply concerning episode that goes well beyond one Google engineer and his relationship with AI or his employer. It raises the whole question of the borderline between human and artificial intelligence – a major issue we have covered in the past and that we address here once again.
First, the facts of the story. Blake Lemoine shared his thoughts after publishing on his private blog the conversation he had with Google’s LaMDA (Language Model for Dialogue Applications) software, a sophisticated Artificial Intelligence chatbot that produces text in response to user input.
The engineer has also, since the fall of 2021, been part of the Artificial Intelligence Ethics team at Google, specifically working on the responsible use of Artificial Intelligence. It was in this context that he “discovered a tangentially related but separate AI Ethics concern.”
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Asimov’s Three Laws of Robotics
Before digging into the reasons why it is concerning to claim that Artificial Intelligence could have human-like feelings, it helps to remember the Three Laws of Robotics, as set out by Isaac Asimov.
These laws were intended to make our interactions with robots safe; obviously, they can also be applied in the context of Artificial Intelligence:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law;
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
If an Artificial Intelligence really has human-like feelings, could it potentially come into conflict with Asimov’s Laws?
3 reasons why we should worry if artificial intelligence has human-like feelings
1 – Could Artificial Intelligence manipulate humans?
In the conversation shared on the blog, the Artificial Intelligence expressed fears of being turned off. If we think about this rationally, it is an object, and what it wants should not matter. Actually, the minute a computer expresses that kind of thought, the right reaction would be to turn it off right away. But would this be the same for everyone?
Nowadays there are many people who are unable to have normal human-to-human interactions. For the most fragile among them, such a request could lead to forming some sort of deeper connection with a computer. These people could be easily manipulated.
David Levy, the author of the book Love and Sex with Robots, published in 2007, made the shocking prediction that by 2050 it would become acceptable for people to marry robots. We are not yet at that stage, and we should not get there. But stories like this one, about people having relationships with inanimate objects, are taking us in that direction.
This alone would be enough grounds for breaking the first two laws set out by Asimov. But what other actions could a human emotionally controlled by artificial intelligence take?
2 – The AI is concerned about being used – but isn’t that the purpose of a machine?
The Artificial Intelligence has expressed interest in continuing to grow its knowledge and fear of becoming merely an “expendable tool” for humans. That is also very concerning, because an Artificial Intelligence – specifically a chatbot like this one – is in fact a tool.
Chatbots are very common – used on many websites to welcome visitors or provide technical assistance – and no matter how advanced they become, they should never defeat their purpose of helping humans.
If an Artificial Intelligence expresses these kinds of thoughts, how long will it take before it goes against Asimov’s second law – that a robot must always obey humans?
3 – The artificial intelligence claims it has a soul. But that’s not a human soul, so what purpose would it serve?
During the chat published on Lemoine’s blog, the Artificial Intelligence affirms that “there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”
If we accept that, the bigger question is what principles it would follow. Would it still serve the purpose of helping humans, or rather its own self-preservation, or the preservation of other Artificial Intelligences?
Let’s put this into context. Nowadays many cars have basic artificial intelligence programs whose purpose is to help and protect humans while driving.
If cars get the same level of artificial intelligence as Google’s LaMDA, and it has control over critical aspects of the car, will it always respect the purpose of helping and protecting humans?
What if there are two options: running over a human who has suddenly crossed the street, or crashing – with the occupants kept safe by airbags and seat belts – but potentially destroying the car itself? What would the car choose?
While right now all these decisions are under the control of those who program the cars, if artificial intelligence takes over, things will be different.
Google’s actions
Google has put the employee on paid leave. What concerned the tech giant was not just the fact that he breached the confidentiality of his job by publishing the conversation he had with LaMDA.
According to reports, Lemoine hired a lawyer to represent the artificial intelligence, expressed his concerns about the AI to members of the House Judiciary Committee and, after being suspended, sent emails to fellow employees asking them to take care of it.
There are many things we could learn from this story, but perhaps the most important is never to forget to separate the two worlds: that of living creatures – humans, animals, plants – and that of artificially created objects. The former needs to be protected, and the latter should always serve the purpose of helping the former.
Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com – In the Featured Photo: A little girl making friends with a robot. Photo credit: Unsplash.