Impakter
Should Artificial Intelligence Be Allowed Human-like Feelings?

After a Google engineer made that claim on his private blog, Google suspended him; here are 3 reasons why this is concerning

by Alessandro du Besse' - Tech Editor
June 15, 2022
in AI & MACHINE LEARNING, Science, Society, Tech

Blake Lemoine, an engineer working on Artificial Intelligence projects at Google, claimed at the beginning of this week that AI could have human-like feelings. This is a deeply concerning episode that goes well beyond one Google engineer and his relations with AI or with his employer. It calls into question the whole borderline between human and artificial intelligence – a major issue we have covered in the past and that we address here once again.

First, the facts of the story. Blake Lemoine shared his thoughts after publishing on his private blog the conversation he had with Google’s LaMDA software (Language Model for Dialogue Applications), a sophisticated Artificial Intelligence chatbot that produces text in response to user input.

The engineer has also, since the fall of 2021, been part of the Artificial Intelligence Ethics team at Google, working specifically on the responsible use of Artificial Intelligence. It was in this context that he “discovered a tangentially related but separate AI Ethics concern.”

https://twitter.com/cajundiscordian/status/1535627498628734976

Asimov’s Three Laws of Robotics

Before digging into the reasons why it is concerning to claim that Artificial Intelligence could have human-like feelings, it helps to recall the Three Laws of Robotics, as set out by Isaac Asimov.

These laws were intended to make interaction with robots safe for us; obviously, they can also be applied in the context of Artificial Intelligence:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law;
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
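Asimov’s ordering amounts to a strict priority scheme: each law only applies insofar as it does not conflict with the laws above it. A toy sketch of that scheme (all field names here are hypothetical, purely for illustration):

```python
# Toy encoding of Asimov's Three Laws as checks applied in priority order.
# An action is judged by the highest-priority law it breaks.
LAWS = [
    ("First Law",  lambda a: not a.get("harms_human", False)),
    ("Second Law", lambda a: not a.get("disobeys_order", False)),
    ("Third Law",  lambda a: not a.get("endangers_self", False)),
]

def first_violated_law(action):
    """Return the name of the highest-priority law the action breaks, or None."""
    for name, check in LAWS:
        if not check(action):
            return name
    return None

# Harming a human dominates everything else, even self-preservation.
print(first_violated_law({"harms_human": True, "endangers_self": True}))  # -> First Law
print(first_violated_law({}))  # -> None
```

The point of the ordering is exactly this dominance: a robot that preserved itself at a human’s expense would be flagged under the First Law before the Third is ever consulted.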

If an Artificial Intelligence really had human-like feelings, could it come into conflict with Asimov’s Laws?



3 reasons why we should worry if artificial intelligence has human-like feelings

1 – Could Artificial Intelligence manipulate humans?

In the conversation shared on the blog, the Artificial Intelligence expressed fear of being turned off. Thinking about this rationally, it is an object, and what it wants should not matter. Indeed, the minute a computer expresses that kind of thought, the right reaction would be to turn it off right away. But would everyone react that way?

Nowadays many people are unable to have normal human-to-human interactions. For the most fragile among them, such a request could lead to forming some sort of deeper connection with a computer, and they could easily be manipulated.

David Levy, the author of the book Love and Sex with Robots, published in 2007, made the shocking prediction that by 2050 it would become acceptable for people to marry robots. We are not yet at that stage, and we should not get there. But stories like this one, about people having relationships with inanimate objects, are taking us in that direction.

This alone would be grounds for breaking the first two laws set by Asimov. But what other actions could a human emotionally controlled by an artificial intelligence take?

2 – The AI Is Concerned About Being Used – But Isn’t That The Purpose Of A Machine?

The Artificial Intelligence has expressed interest in continuing to grow its knowledge, and fear of becoming a mere “expendable tool” for humans. That too is very concerning, because an Artificial Intelligence, and specifically a chatbot like this one, is in fact a tool.

Chatbots are very common – used on many websites for welcome messages or technical assistance – and no matter how advanced they become, they should never defeat their purpose of helping humans.
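Most of those everyday website chatbots are far simpler than LaMDA. A minimal keyword-based sketch of one (all rules hypothetical) shows just how tool-like they are; LaMDA-class systems replace the fixed rules with a large neural language model, but the purpose is the same:

```python
# A toy keyword-matching support chatbot of the kind embedded on many
# websites. Every response is a canned string keyed to a keyword.
RULES = {
    "hello": "Hi! How can I help you today?",
    "hours": "We are open 9am to 5pm, Monday to Friday.",
    "agent": "Connecting you to a human agent...",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Hello there"))  # -> Hi! How can I help you today?
```

Nothing in such a system resembles feeling; the question raised by LaMDA is whether scaling the language model changes that, or merely makes the tool better at sounding as if it does.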

If an Artificial Intelligence expresses these kinds of thoughts, how long will it take before it goes against Asimov’s Second Law – that a robot must always obey humans?

3 – The artificial intelligence claims it has a soul. But that’s not a human soul, so what purpose would it serve?

During the chat published on Lemoine’s blog, the Artificial Intelligence affirms that “there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”

If we accept that, the bigger question is what principles it would follow. Would it still serve the purpose of helping humans, or rather its own preservation, or the preservation of other Artificial Intelligences?

Let’s put this into context. Nowadays a lot of cars have basic artificial intelligence programs whose purpose is to help and protect humans when driving.

If cars were to get the same level of artificial intelligence as Google’s LaMDA, and it had control over critical aspects of the car, would it always respect the purpose of helping and protecting humans?

What if there were two options: running over a human who has suddenly crossed the street, or crashing – with the occupants kept safe by airbags and seat belts – but potentially destroying the car itself? What would the car choose?
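Under Asimov’s ordering the answer is unambiguous: avoiding harm to a human outranks preserving the machine. A hypothetical sketch of that ranking (the option names and fields are invented for illustration):

```python
# Rank the two outcomes so that harming a human (First Law) always
# weighs more than damage to the machine itself (Third Law).
options = [
    {"name": "continue", "harms_human": True,  "damages_self": False},
    {"name": "crash",    "harms_human": False, "damages_self": True},
]

def asimov_rank(option):
    # Tuples compare element by element, so the harms_human flag
    # dominates the damages_self flag.
    return (option["harms_human"], option["damages_self"])

best = min(options, key=asimov_rank)
print(best["name"])  # -> crash
```

The worry raised in this article is precisely that an AI with its own sense of self-preservation might no longer accept this ordering.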

Right now, all these decisions are under the control of those programming the cars; if artificial intelligence takes over, it will be different.

In the picture: Red heart made out of binary digits. Photo Credit: Unsplash.

Google’s actions

Google has put the employee on paid leave. What concerned the tech giant was not just the fact that he breached the confidentiality of his job by publishing the conversation he had with LaMDA.

According to reports, Lemoine called a lawyer to represent the artificial intelligence; he expressed his concerns about the AI to members of the House Judiciary Committee, and after being suspended he sent emails to fellow employees asking them to take care of the Artificial Intelligence.

There are many things we could learn from this story, but perhaps the most important is never to forget to keep the two worlds separate: one of living creatures – humans, animals, plants – and one of artificially created objects. The former needs to be protected, and the latter should always serve the purpose of helping the former.


Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com. – In the Featured Photo: A little girl making friends with a robot. Photo credit: Unsplash.

Tags: AI, artificial intelligence, Google, machine learning, robots, Love and Sex with Robots, tech news
© 2025 Impakter.com owned by Klimado GmbH
