The paper investigates whether cognitive systems, particularly large language models (LLMs), possess subjective consciousness or genuine understanding. It argues that although LLMs exhibit impressive language-processing abilities, they lack subjective consciousness because their architecture contains no central subject, a point demonstrated through an adapted version of Searle's Chinese Room thought experiment. The study emphasizes that understanding in LLMs is asubjective and distinct from human understanding, raising questions about the nature of consciousness and understanding in cognitive systems.