The inner workings of large language models remain opaque. Psychomatics offers a novel framework to bridge the gap between human and AI cognition.
Artificial intelligence (AI) has made remarkable advances, with large language models (LLMs) such as ChatGPT capable of feats that rival or even surpass human performance.
However, as these AI systems become increasingly sophisticated, a fundamental problem has emerged: we don’t fully understand how they work.
This lack of understanding presents a significant challenge for researchers, developers and society at large.
As Matthew Hutson noted in a recent Nature article, the opacity of AI systems raises concerns about their reliability, potential biases and the ethical implications of their widespread use.
Douglas Heaven echoed this sentiment in MIT Technology Review: “Large language models can do jaw-dropping things. But nobody knows exactly why.”
Knowledge gap
This knowledge gap not only hinders our ability to improve and refine AI systems but also raises questions about their trustworthiness and how to responsibly integrate them into critical domains such as healthcare, finance and decision-making processes.
In response to this pressing need for greater insight into AI functioning, researchers have proposed a novel multidisciplinary framework called psychomatics.
Introduced in a recent paper by a team of European and American researchers, coordinated by the Humane Technology Lab (Catholic University of Sacred Heart, Milan), this approach aims to bridge the gap between artificial and biological intelligence by combining insights from cognitive science, linguistics, and computer science.
Psychomatics, a fusion of “psychology” and “informatics,” offers a comparative methodology for exploring how LLMs acquire, learn, remember, and use information to produce their outputs.
By drawing parallels between AI systems and biological minds, this framework seeks to provide a deeper understanding of the similarities and differences in their cognitive processes.
The central question driving psychomatics is: “Is the process of language development and use different in humans and LLMs?”
By addressing this fundamental inquiry, researchers hope to gain valuable insights into the nature of language, cognition, and intelligence – both artificial and biological.
The psychomatics approach has already yielded several important insights into the distinctions between AI systems like LLMs and human cognition.
Learning and development
Humans acquire language through a gradual process of social, emotional, and linguistic interactions that begin in infancy and continue throughout life.
In contrast, LLMs are trained on vast, pre-existing datasets in a relatively short time frame. This fundamental difference in the learning process has significant implications for how each system understands and uses language.
The role of experience and embodiment
Human cognition is deeply rooted in physical embodiment and direct experiences with the world. Our understanding of language and concepts is shaped by our sensory perceptions, emotions, and interactions with our environment.
LLMs, lacking physical bodies and sensory experiences, rely solely on statistical patterns in their training data to approximate meaning.
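The point about statistical pattern-matching can be made concrete with a deliberately tiny sketch. The bigram model below is not how LLMs work internally (they use vastly larger neural networks), and the corpus and the `predict_next` function are invented for illustration; but the principle is the same: the system’s only “knowledge” of language is the co-occurrence statistics of its training text.

```python
from collections import Counter, defaultdict

# Toy training text: the model never sees the world, only these words.
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → 'cat' (it followed "the" most often)
```

The model “approximates meaning” only in the sense that frequent patterns become likely predictions; nothing in it corresponds to ever having seen a cat or a mat.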
Sources of meaning
For humans, language is just one of several sources of meaning. Our understanding of the world is also informed by direct experiences, emotions, and imagination.
LLMs, on the other hand, derive meaning exclusively from the linguistic patterns present in their training data. This limitation can lead to what researchers call “hallucinations” – instances in which LLMs confidently produce incorrect or nonsensical information.
Intentionality and consciousness
Humans possess conscious intentions, self-awareness and the ability to make deliberate choices.
LLMs, while capable of producing coherent and seemingly purposeful responses, lack true consciousness or internal motivations. Their outputs are fundamentally reactive, based on patterns in their training data and the prompts they receive.
Creativity and novel meaning generation
Humans can use imagination and combine existing knowledge in unique ways to generate entirely new ideas and meanings.
LLMs, while adept at recombining existing information in impressive ways, are ultimately limited to the patterns present in their training data. They cannot truly create novel meanings in the way that humans can.
Contextual understanding and pragmatics
Humans excel at interpreting subtle contextual cues, understanding sarcasm and navigating complex social situations.
While LLMs can often produce contextually appropriate responses, they struggle with more nuanced aspects of communication, such as detecting sarcasm or understanding faux pas without explicit training in these areas.
Assessing truth
Humans can rely on multiple sources of information, including direct experiences and critical thinking, to verify claims and assess the truth of statements.
LLMs attempt to determine truth based on the probability of scenarios within their training data, which can lead to confident assertions of incorrect information (hallucinations).
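This frequency-based notion of “truth” can be illustrated with another toy sketch (again, a bigram frequency model, not a real LLM; the corpus and the `score` function are invented for illustration): a purely statistical scorer rates a statement by how often similar word sequences appeared in training, so a falsehood that appears in the data can outscore a truth that never does.

```python
from collections import Counter

# A tiny corpus containing a repeated true statement and one false one.
corpus = ("paris is in france . " * 3 + "rome is in france . ").split()

pair_counts = Counter(zip(corpus, corpus[1:]))  # bigram frequencies
unigram = Counter(corpus)                        # word frequencies

def score(sentence):
    """Product of conditional bigram frequencies; 0 if any pair is unseen."""
    words = sentence.split()
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        if unigram[prev] == 0:
            return 0.0
        p *= pair_counts[(prev, nxt)] / unigram[prev]
    return p
```

Here `score("rome is in france")` is high because that (false) sentence occurs in the corpus, while `score("rome is in italy")` is zero because the (true) pairing was never seen – a minimal caricature of how a model can confidently assert what is frequent rather than what is true.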
Potential benefits of psychomatics
By systematically comparing the cognitive processes of AI systems and biological minds, psychomatics offers several potential benefits.
First, understanding the fundamental differences between human and AI cognition can inform the development of more robust, reliable and potentially more human-like AI systems.
Second, by providing a framework for analysing how LLMs process and generate information, psychomatics could help make AI systems more transparent and interpretable.
Third, a deeper understanding of AI cognition can inform discussions about the ethical implications of AI use and help develop more responsible deployment strategies.
Finally, by bringing together experts from cognitive science, linguistics, and computer science, psychomatics fosters interdisciplinary collaboration and knowledge exchange.
While psychomatics offers a promising approach to understanding AI systems, significant challenges remain.
The sheer complexity of modern LLMs makes it difficult to fully unravel their inner workings. Additionally, as AI systems continue to evolve rapidly, frameworks like psychomatics will need to adapt to keep pace with new developments.
Exploring the AI potential
Future research in psychomatics may focus on developing more sophisticated comparative methodologies, exploring the potential for AI systems with greater embodied understanding and investigating ways to imbue AI with more human-like capabilities for generating novel meanings and understanding context.
As we continue to push the boundaries of artificial intelligence, frameworks such as psychomatics will play a crucial role in helping us understand these powerful but opaque systems.
By bridging the gap between artificial and biological cognition, we can work towards developing AI that is not only more capable but also more aligned with human values and understanding.
This article was originally published by 360info™.
Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com — Cover Photo Credit: Geralt.