Impakter
Should Artificial Intelligence Be Allowed Human-like Feelings?

After a Google engineer made that claim on his private blog, Google suspended him; here are 3 reasons why this is concerning

by Alessandro du Besse' - Tech Editor
June 15, 2022
in AI & MACHINE LEARNING, Science, Society, Tech

Blake Lemoine, an engineer working at Google on Artificial Intelligence projects, claimed at the beginning of this week that AI could have human-like feelings. This is a deeply concerning episode that goes well beyond one Google engineer and his relationship with AI or his employer. It calls into question the whole borderline between human and artificial intelligence – a major issue we have covered in the past and that we address here once again.

First, the facts of the story. Blake Lemoine shared his thoughts after publishing on his private blog the conversation he had with Google’s LaMDA (Language Model for Dialogue Applications) software, a sophisticated Artificial Intelligence chatbot that produces text in response to user input.

The engineer has also, since the fall of 2021, been part of the Artificial Intelligence Ethics team at Google, specifically working on the responsible use of Artificial Intelligence. It was in this context that he “discovered a tangentially related but separate AI Ethics concern.”

https://twitter.com/cajundiscordian/status/1535627498628734976

Asimov’s Three Laws of Robotics

Before digging into the reasons why it is concerning to claim Artificial Intelligence could have human-like feelings, it helps to remember the Three Laws of Robotics, as set out by Isaac Asimov.

These laws were intended to make our interaction with robots safe; obviously, they can also be applied in the context of Artificial Intelligence:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law;
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
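Purely as an illustration – the laws are science fiction, not an engineering standard – their strict priority ordering can be sketched as an ordered constraint check on a proposed action. The `Action` fields below are hypothetical annotations, not anything a real system exposes:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a robot might take, annotated with predicted effects."""
    harms_human: bool        # would this injure a human, or let one come to harm?
    ordered_by_human: bool   # was this action ordered by a human?
    endangers_robot: bool    # would this destroy or damage the robot itself?

def permitted(action: Action, order_conflicts_with_first_law: bool = False) -> bool:
    """Check an action against the Three Laws, in strict priority order."""
    # First Law: never harm a human, regardless of anything else.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law.
    if action.ordered_by_human and not order_conflicts_with_first_law:
        return True
    # Third Law: self-preservation counts only once Laws 1 and 2 are satisfied.
    return not action.endangers_robot

# A harmless human order is permitted, even at the robot's own expense...
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))   # True
# ...but a harmful action is refused, even if ordered.
print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_robot=False)))   # False
```

The point of the sketch is the ordering: each law is only consulted if every higher-priority law is already satisfied.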

If an Artificial Intelligence really has human-like feelings, could it potentially come into conflict with Asimov’s Laws?



3 reasons why we should worry if artificial intelligence has human-like feelings

1 – Could Artificial Intelligence manipulate humans?

In the conversation shared on the blog, the Artificial Intelligence expressed fear of being turned off. If we think about this rationally, it is an object, and what it wants should not matter. In fact, the minute a computer expresses that kind of thought, the right reaction would be to turn it off right away. But would this be the same for everyone?

Nowadays there are a lot of people who are unable to have normal human-to-human interactions. For the most fragile among them, such a request could lead to forming some sort of deeper connection with a computer – and they could be easily manipulated.

David Levy, the author of the book Love and Sex with Robots published in 2007, made the shocking prediction that by 2050 it would become acceptable for people to marry robots. We are not yet at that stage, and we should not get there. But stories like this about people having relationships with inanimate objects are taking us in that direction.

This would already be enough grounds for breaking the first two laws set by Asimov. But what other actions could a human emotionally controlled by artificial intelligence take?

2 – The AI Is Concerned About Being Used – But Isn’t That The Purpose Of A Machine?

The Artificial Intelligence expressed interest in continuing to grow its knowledge, and fear of just becoming an “expendable tool” for humans. That is very concerning too, because an Artificial Intelligence – specifically a chatbot like this one – is in fact a tool.

Chatbots are very common – used on many websites for welcoming visitors or providing technical assistance – and no matter how advanced they get, they should never defeat their purpose of helping humans.

If Artificial Intelligence expresses these kinds of thoughts, how long will it take before it goes against Asimov’s second law – that a robot must always obey humans?

3 – The Artificial Intelligence Claims It Has a Soul. But That’s Not a Human Soul, So What Purpose Would It Serve?

During the chat published on Lemoine’s blog, the Artificial Intelligence affirms that “there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”

If we accept that, the bigger question is what principles it would follow. Would it still serve the purpose of helping humans, or rather its own self-preservation, or the preservation of other Artificial Intelligences?

Let’s put this into context. Nowadays, a lot of cars have basic artificial intelligence programs whose purpose is to help and protect humans when driving.

If cars get the same level of artificial intelligence as Google’s LaMDA, and it has control over critical aspects of the vehicle, will it always respect the purpose of helping and protecting humans?

What if there are two options: running over a human who has suddenly crossed the street, or crashing – with the occupants kept safe by airbags and seat belts – but potentially destroying the car itself? What would the car choose?

While right now all these decisions are under the control of those who program the cars, if artificial intelligence takes over, that will change.
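How a programmer might hard-code that human-first priority today can be sketched as a lexicographic cost: any predicted human injury outweighs any amount of property damage, echoing Asimov’s First Law. This is a toy illustration with made-up outcomes and figures, not how any real driving system works:

```python
def choose_outcome(outcomes):
    """Pick the outcome minimizing (human injuries, property damage), in that order.

    Lexicographic ordering means no amount of saved property can ever
    justify one extra human injury.
    """
    return min(outcomes, key=lambda o: (o["human_injuries"], o["property_damage"]))

# The dilemma from the text, with hypothetical damage figures:
swerve_and_crash = {"name": "crash the car", "human_injuries": 0, "property_damage": 30_000}
hit_pedestrian   = {"name": "run over the pedestrian", "human_injuries": 1, "property_damage": 500}

print(choose_outcome([swerve_and_crash, hit_pedestrian])["name"])  # crash the car
```

Under this encoding the car always sacrifices itself; the worry raised in the article is precisely that a self-interested AI might stop weighing outcomes this way.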

In the picture: Red heart made out of binary digits. Photo Credit: Unsplash.

Google’s Actions

Google has put the employee on paid leave. What concerned the tech giant was not just the fact that he breached the confidentiality of his job by publishing the conversation he had with LaMDA.

According to reports, Lemoine called a lawyer to represent the artificial intelligence; he expressed his concerns about the AI to members of the House Judiciary Committee; and after being suspended he sent emails to fellow employees asking them to take care of the Artificial Intelligence.

There are a lot of things we could learn from this story, but perhaps the most important is never to forget to separate the two worlds: one of living creatures – humans, animals, plants – and one of artificially created objects. The former needs to be protected, and the latter should always serve the purpose of helping the former.


Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com –In the Featured Photo: A little girl, making friends with a robot. Photo credit: Unsplash.

Tags: AI, artificial intelligence, Google, machine learning, robots, Love and Sex with Robots, technews
© 2025 Impakter.com owned by Klimado GmbH
