Impakter

AI Is Changing the Way We Think

A new study explores how AI is integrated into our thinking, creating a personalised cognitive layer that shapes decision-making and impacts human agency

By Giuseppe Riva, Director of the Humane Technology Lab at the Catholic University of Milan, and Massimo Chiriatti, Chief Technical & Innovation Officer at Lenovo
November 12, 2024
In AI & Machine Learning

It is becoming more common for people to check their phone’s weather app before deciding what to wear, use Google Maps to navigate and ask ChatGPT to draft an email. These everyday AI interactions are becoming so seamless that they now extend our cognitive capabilities beyond natural limits.

This phenomenon, identified by researchers from the Catholic University of the Sacred Heart, Milan, in a new multidisciplinary study published in Nature Human Behaviour, has been termed “System 0.”

Understanding “System 0”


A team of researchers with expertise in AI, human thinking, neuroscience, human interaction and philosophy describes “System 0” as an autonomous AI layer increasingly embedded in our thinking and decision-making.

To understand System 0, it’s essential to consider how minds work. Psychologists like Daniel Kahneman describe human thought as having two systems: System 1 (fast, intuitive) and System 2 (slow, analytical).

System 1 handles routine tasks, like recognizing faces or driving a familiar route, while System 2 tackles complex problems, like solving a maths equation or planning a trip.

How “System 0” adapts to us

System 0, however, introduces a new layer feeding data to both systems, adapting to personal habits.

When choosing a restaurant, for instance, System 1 may respond to photos on Yelp while System 2 assesses reviews and prices – but both interact with AI-tailored recommendations based on dining preferences, budget constraints, and past choices.

The AI doesn’t just present generic recommendations; it creates a personalised information landscape based on your history of interactions.
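The re-ranking the study describes can be pictured with a toy sketch. The code below is purely illustrative, not any real recommender system: candidate names, ratings and the fixed familiarity bonus are all invented assumptions, chosen only to show how a history of past choices can reshape a generic ranking.

```python
# Illustrative "System 0" layer: re-rank generic restaurant suggestions
# using a user's interaction history. All names and weights are invented.

def personalise(candidates, history, budget):
    """Score candidates by rating plus a bonus for cuisines the user has
    chosen before, filtering out options above the stated budget."""
    liked = {visit["cuisine"] for visit in history}
    affordable = [c for c in candidates if c["price"] <= budget]
    # Past choices bias the ranking: familiar cuisines get a fixed bonus.
    return sorted(
        affordable,
        key=lambda c: c["rating"] + (1.0 if c["cuisine"] in liked else 0.0),
        reverse=True,
    )

candidates = [
    {"name": "Trattoria", "cuisine": "italian", "rating": 4.2, "price": 25},
    {"name": "Sushi Bar", "cuisine": "japanese", "rating": 4.6, "price": 40},
    {"name": "Taqueria", "cuisine": "mexican", "rating": 3.9, "price": 15},
]
history = [{"cuisine": "italian"}, {"cuisine": "italian"}]

ranked = personalise(candidates, history, budget=30)
# The Italian option now outranks the higher-rated sushi bar: it fits the
# budget and matches the user's history, so it is surfaced first.
```

The point of the sketch is that no candidate list is ever shown "raw": the ordering each user sees is already a product of what they did before.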

Similarly, when navigating with Google Maps, users’ instincts and planning rely on AI-processed data that incorporates their travel habits, producing personalised, predictive guidance. System 0 not only processes real-time data but also leverages users’ historical preferences. Integrating AI this deeply into cognitive processes, however, raises well-documented challenges.

System 0 maintains a persistent memory of choices, behaviours, and preferences across multiple domains, creating what researchers call a “cognitive shadow” – an AI-driven layer that not only tracks current actions but also retains past choices.
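A minimal sketch can make the “cognitive shadow” idea concrete. The class below is hypothetical, not from the study: it simply shows what a persistent, cross-domain memory of choices looks like, with every interaction retained and available to inform later recommendations in any domain.

```python
# Hypothetical "cognitive shadow": a persistent store that accumulates a
# user's choices across unrelated domains and exposes the resulting
# profile to any future recommendation.

from collections import defaultdict

class CognitiveShadow:
    def __init__(self):
        # domain -> choice -> how often the user picked it
        self._memory = defaultdict(lambda: defaultdict(int))

    def record(self, domain, choice):
        """Every interaction, in any domain, is retained."""
        self._memory[domain][choice] += 1

    def preference(self, domain):
        """Return the most frequent past choice in a domain, if any."""
        counts = self._memory[domain]
        return max(counts, key=counts.get) if counts else None

shadow = CognitiveShadow()
shadow.record("navigation", "avoid motorways")
shadow.record("navigation", "avoid motorways")
shadow.record("dining", "italian")

# The same layer now informs decisions in both domains.
print(shadow.preference("navigation"))  # avoid motorways
print(shadow.preference("dining"))      # italian
```

Unlike a single app’s settings, nothing here ever expires or stays confined to one domain, which is precisely what makes the shadow both useful and, as the next section argues, ethically fraught.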

The ethics of AI in decision-making

Yet the implications of this memory-enhanced cognitive layer are profound. System 0’s memory-based design means it doesn’t just assist with current data but is informed by an AI memory of past behaviour. This poses critical questions about autonomy and privacy, as decisions are increasingly influenced by digital insights into personal patterns. Moreover, System 0 functions quite differently from human cognition.

As Pearl and Mackenzie discuss in “The Book of Why,” AI can recognize patterns in data but struggles with causality – a fundamental part of human reasoning. This limitation means that while AI can identify patterns, it may miss the underlying causal relationships that humans naturally grasp.

This difference raises concerns about AI’s role in our decision-making. As researchers note, AI lacks true semantic understanding, even as it produces responses that resemble human thought.



The “black box” problem

The increasing integration of System 0 into human cognitive processes raises fundamental concerns about human autonomy and trust that researchers have extensively studied. Taddeo and Floridi (2011) introduce the concept of “e-trust” – a unique trust dynamic in which AI wields influence without possessing moral agency.

This asymmetry becomes problematic with AI personalization, which narrows our information exposure, potentially confining users within “filter bubbles” that limit critical thinking.

Users often accept AI suggestions even when they conflict with personal preferences, a tendency termed “algorithmic homogenization,” which may erode independent judgement.

The opacity of AI algorithms compounds this issue. With complex AI processes often hidden from users, understanding AI conclusions becomes difficult, affecting our capacity for informed judgement.

This “black box” problem raises serious concerns about accountability and human agency in AI-assisted decision-making: as System 0 integrates more deeply into our decision-making processes, the distribution of responsibility becomes increasingly unclear. When decisions are made through human-AI collaboration, it creates a “responsibility gap” where neither humans nor AI systems can be held fully accountable for outcomes.

Perhaps most importantly, the integration of System 0 affects our fundamental nature as thinking beings. Although System 0 augments cognitive abilities, it risks reducing human agency.

Balancing AI’s potential with human autonomy

As people rely on AI for decision support, they may lose vital opportunities to hone cognitive skills.

This could lead to a problematic form of cognitive offloading, where dependency on external systems undermines intellectual growth. For these reasons, designing AI that supports autonomy without overstepping is crucial for maintaining this powerful human-AI partnership.

As AI evolves, understanding and shaping our relationship with System 0 will become increasingly crucial. The challenge lies in harnessing its potential while preserving human qualities – the ability to create meaning, exercise judgement, and maintain our intellectual independence.


This article was originally published by 360info™.


Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — In the Cover Photo: How ChatGPT visualizes itself. Cover Photo Credit: Wikimedia Commons. 

Tags: AI, AI algorithms, artificial intelligence, Catholic University of Sacred Heart, ChatGPT, human thinking, neuroscience, System 0, thinking
© 2025 Impakter.com owned by Klimado GmbH
