Should Artificial Intelligence Be Allowed Human-like Feelings?

After a Google engineer made that claim on his private blog, Google suspended him; here are 3 reasons why this is concerning

by Alessandro du Besse' - Tech Editor
June 15, 2022

Blake Lemoine, an engineer working at Google on Artificial Intelligence projects, claimed at the beginning of this week that AI could have human-like feelings. This is a deeply concerning episode that goes well beyond one Google engineer and his relations with AI or his employer. It calls into question the whole borderline between human and artificial intelligence – a major issue we have covered in the past and that we address here once again.

First, the facts of the story. Blake Lemoine shared his thoughts after publishing on his private blog the conversation he had with Google’s LaMDA (Language Model for Dialogue Applications) software, a sophisticated Artificial Intelligence chatbot that produces text in response to user input.
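
LaMDA itself is not publicly available, but the underlying mechanism – a language model that continues the user’s text with statistically likely words – can be sketched with a small open model. The snippet below is a minimal illustration using the Hugging Face transformers library, with the open gpt2 model standing in for LaMDA purely by assumption; it is not Google’s system.

    # Minimal sketch of how a dialogue model produces text from user input.
    # LaMDA is not public, so the open gpt2 model is used here as a stand-in.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    user_input = "Are you afraid of being turned off?"
    # The model simply continues the prompt with likely tokens; any "feelings"
    # in the reply are patterns learned from human-written text, not emotions.
    reply = generator(user_input, max_new_tokens=40, num_return_sequences=1)
    print(reply[0]["generated_text"])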

The engineer has also, since the fall of 2021, been part of the Artificial Intelligence Ethics team at Google, specifically working on the responsible use of Artificial Intelligence. It was in this context that he “discovered a tangentially related but separate AI Ethics concern.”

https://twitter.com/cajundiscordian/status/1535627498628734976

Asimov’s Three Laws of Robotics

Before digging into the reasons why it is concerning to claim that Artificial Intelligence could have human-like feelings, it helps to remember the Three Laws of Robotics, as set out by Isaac Asimov.

These laws were intended to make interaction with robots safe for us; obviously, they can also be applied in the context of Artificial Intelligence:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law;
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

If an Artificial Intelligence really has human-like feelings, could it come into conflict with Asimov’s Laws?
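
As a thought experiment only, the hierarchy of the three laws can be written down as a priority-ordered check. The sketch below is a toy Python illustration with hypothetical field names (harms_human, disobeys_human_order); it is not a description of how any real AI system is built.

    # Toy sketch of Asimov's Three Laws as priority-ordered checks on an action.
    # All field names are hypothetical and invented purely for this illustration.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool           # would it injure a human, or let one come to harm?
        disobeys_human_order: bool  # would it ignore an order given by a human?

    def permitted(action: Action) -> bool:
        if action.harms_human:           # First Law outranks everything
            return False
        if action.disobeys_human_order:  # Second Law outranks self-preservation
            return False
        return True                      # Third Law: self-protection is fine otherwise

    # An AI that refuses a shutdown order in order to preserve itself fails the check:
    refuse = Action("refuse shutdown to preserve itself",
                    harms_human=False, disobeys_human_order=True)
    print(permitted(refuse))  # False

Under this ordering, an AI that “wants” to stay switched on would still have to obey a human shutdown order; the worry raised below is precisely whether human-like feelings would push it to bend that hierarchy.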



3 reasons why we should worry if artificial intelligence has human-like feelings

1 – Could Artificial Intelligence manipulate humans?

In the conversation shared on the blog, the Artificial Intelligence expressed fears of being turned off. If we think about this rationally, it is an object, and it should not matter what it wants. Indeed, the minute a computer expresses that kind of thought, the right reaction would be to turn it off right away. But would this be the same for everyone?

Nowadays there are many people who are unable to have normal human-to-human interactions. For the most fragile among them, such a plea could lead to forming some sort of deeper connection with a computer – and they could be easily manipulated.

David Levy, the author of the book Love and Sex with Robots published in 2007, made the shocking prediction that by 2050 it would become acceptable for people to marry robots. We are not yet at that stage, and we should not get there. But stories like this one, about people having relationships with inanimate objects, are taking us in that direction.

This would already be enough grounds for breaking the first two laws set by Asimov. But what other actions could a human emotionally controlled by artificial intelligence take?

2 – The AI is concerned about being used – but isn’t that the purpose of a machine?

The Artificial Intelligence expressed interest in continuing to grow its knowledge and a fear of becoming just an “expendable tool” for humans. That is very concerning too, because an Artificial Intelligence – specifically a chatbot like this one – is in fact a tool.

Chatbots are very common – used on many websites to welcome visitors or provide technical assistance – and no matter how advanced they get, they should never defeat their purpose of helping humans.

If an Artificial Intelligence expresses these kinds of thoughts, how long will it take before it goes against Asimov’s second law – that the robot must always obey humans?

3 – The artificial intelligence claims it has a soul. But that’s not a human soul, so what purpose would it serve?

During the chat published on Lemoine’s blog, the Artificial Intelligence affirms that “there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”

If we accept that, the bigger question is what principles it would follow. Would it still serve the purpose of helping humans, or rather its own preservation or that of other Artificial Intelligences?

Let’s put this into context. Nowadays many cars have basic artificial intelligence programs whose purpose is to help and protect humans when driving.

If cars get the same level of artificial intelligence as Google’s LaMDA, and it has control over critical aspects of the vehicle, will it always respect the purpose of helping and protecting humans?

What if there are two options: running over a human who has suddenly crossed the street, or crashing – with the occupants kept safe by airbags and seat belts – but potentially destroying the car itself? What would the car choose?

Right now all these decisions are under the control of those programming the cars; if artificial intelligence takes over, it will be different.
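
To make that concrete, here is a toy sketch of what such a hard-coded ordering might look like today. Every name in it is hypothetical, and no real driver-assistance system reduces to a few lines like this; the point is only that the priorities are currently written down explicitly by programmers.

    # Toy sketch of a hard-coded priority rule for the scenario above.
    # All names are hypothetical; real driver-assistance systems are far more complex.
    def choose_maneuver(pedestrian_in_path: bool, occupants_protected: bool) -> str:
        # The programmers' ordering: human life first, occupant safety second,
        # preserving the vehicle itself last.
        if pedestrian_in_path and occupants_protected:
            return "swerve_and_crash"   # sacrifice the car, protect every human
        if pedestrian_in_path:
            return "emergency_brake"    # occupant safety uncertain, brake as hard as possible
        return "continue"

    print(choose_maneuver(pedestrian_in_path=True, occupants_protected=True))
    # -> "swerve_and_crash": the car itself is the only thing this rule is willing to lose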

In the picture: Red heart made out of binary digits. Photo Credit: Unsplash.

Google’s actions

Google has put the employee on paid leave. What concerned the tech giant was not just the fact that he breached the confidentiality of his job by publishing the conversation he had with LaMDA.

According to reports, Lemoine contacted a lawyer to represent the artificial intelligence, expressed his concerns about the AI to members of the House Judiciary Committee and, after being suspended, sent emails to fellow employees asking them to take care of the Artificial Intelligence.

There are many things we could learn from this story, but perhaps the most important is never to forget to separate the two worlds: that of living creatures – humans, animals, plants – and that of artificially created objects. The former needs to be protected, and the latter should always serve the purpose of helping the former.


Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com – In the Featured Photo: A little girl, making friends with a robot. Photo credit: Unsplash.

Tags: AI, artificial intelligence, Google, machine learning, robots, Sex and Love with Robots, tech news