AI and the Death of Uncertainty: Risk of Premature Cognitive Closure
By: Dr. Ivan Del Valle - Published: June 24th, 2025
Abstract
Artificial intelligence (AI) systems are increasingly providing immediate, authoritative answers to users’ questions – a phenomenon that some argue heralds the “death of uncertainty.” This paper explores how the instant certainty offered by AI tools may hinder cognitive development and flexibility. Drawing on neuropsychology and psychoneuroendocrinoimmunology (PNEI) perspectives, we examine the potential reduction of curiosity, intellectual humility, and epistemic flexibility caused by overreliance on AI-generated answers. A thorough theoretical overview outlines AI’s influence on cognition, including the offloading of memory and critical thinking to machines and the resultant risks of premature cognitive closure. Insights from neuropsychology and PNEI highlight the role of uncertainty and curiosity in healthy cognitive growth, neural plasticity, and stress resilience. We review empirical evidence and case studies illustrating changes in human behavior and neurocognitive patterns linked to frequent AI interaction – from “digital amnesia” in everyday smartphone users to shifts in student learning behavior with AI chatbots. Broader societal and developmental risks are discussed, such as increased cognitive rigidity, reduced problem-solving skills, and vulnerability to biases when human thinking becomes dogmatically shaped by machine-provided answers. The findings underscore an urgent need for strategies to maintain epistemic humility and cognitive flexibility in an AI-saturated world. We conclude with recommendations for educational and societal interventions to ensure that embracing AI does not come at the cost of curiosity, critical thinking, and resilience.
Keywords: cognitive offloading; curiosity; critical thinking; cognitive flexibility; uncertainty; artificial intelligence; psychoneuroendocrinoimmunology; digital amnesia; resilience; need for closure
Introduction
In an age where information is instantly at our fingertips, uncertainty is increasingly treated as an inconvenience to be eliminated. Modern artificial intelligence (AI) tools – from search engines to advanced conversational agents – can provide seemingly definitive answers within seconds, promising relief from the discomfort of not knowing. Such instant certainty, however, may come with hidden costs to human cognition. Scholars and commentators have begun warning of a “death of uncertainty” in which the perpetual availability of AI-generated answers dampens our natural curiosity and critical thinking. The concern is that by constantly avoiding uncertainty and ambiguity, individuals may experience a form of premature cognitive closure, accepting AI outputs as final truths without the healthy process of questioning and exploration. This paper examines the implications of this phenomenon and asks: Does the convenience of AI-provided certainty hinder cognitive development and flexibility?
The allure of certain answers is deeply rooted in human psychology. Social-cognitive research on the need for cognitive closure (NFCC) demonstrates that many individuals have a tendency to seek quick, clear conclusions and avoid the discomfort of ambiguity. High NFCC is associated with ambiguity aversion and cognitive rigidity, as people with this disposition prefer familiar, definite answers over open-ended inquiry. AI systems cater directly to this tendency by delivering immediate solutions to questions in everyday life. Whether asking a smartphone voice assistant for a trivial fact or relying on an AI chatbot for a complex explanation, users are often rewarded with confident, authoritative responses. There is growing concern that this on-demand certainty may short-circuit the processes that normally occur when a person grapples with a problem or an unknown. Rather than tolerating uncertainty long enough to investigate and reason through an answer, we outsource the struggle to machines – an act of cognitive offloading that could undermine the development of problem-solving skills and epistemic humility.
Beyond anecdotal worries, emerging scientific evidence suggests that overreliance on AI and digital information sources can indeed affect cognitive functions. Early studies on the so-called “Google effect” found that easy access to online information reduced people’s tendency to remember facts, since they knew answers could be retrieved externally. Researchers describe this as digital amnesia – a growing dependence on devices as memory crutches, leading to the atrophy of internal memory retention. Similarly, experiments show that integrating AI into educational settings can yield mixed outcomes: while AI tutors can boost performance on certain tasks, students may become less engaged in deep critical thinking when answers are readily provided. A theoretical paradox is at play: AI has the potential to enhance learning through personalization and instant feedback, yet it may also induce cognitive dependency, where users accept solutions passively without scrutiny. Over time, this dependency could foster a habit of premature closure, wherein individuals stop exploring alternative viewpoints or solutions once the AI delivers an answer.
The implications of these trends extend into neuropsychology and even psychoneuroendocrinoimmunology (PNEI) – interdisciplinary fields that examine how psychological factors impact the brain, hormonal systems, and immune function. From a neuropsychological perspective, uncertainty and curiosity are not just abstract concepts; they are tied to concrete neural processes that drive learning and adaptation. The brain’s reward circuitry is known to engage when we are curious or faced with a challenging question, releasing neurotransmitters like dopamine that enhance memory and motivation. If AI removes the opportunity for this curiosity-driven neural reward, what effect might that have on cognitive development, especially in young people whose brains are still maturing? In parallel, PNEI research highlights how experiencing and overcoming manageable levels of stress – including the stress of uncertainty – can build resilience in our neural, endocrine, and immune systems. The discomfort of not knowing might actually be a crucible in which cognitive flexibility and stress-coping mechanisms are strengthened. By contrast, a life of instant answers and minimal intellectual challenge could leave individuals ill-equipped to handle unpredictable, high-stakes situations that inevitably arise.
This paper aims to synthesize current knowledge and recent findings on these issues, restricted to evidence from the last five years to capture the state-of-the-art understanding. We begin with a theoretical overview of AI’s influence on cognition, defining key concepts such as cognitive offloading, epistemic humility, and need for closure in the context of AI usage. We then delve into insights from neuropsychology and PNEI regarding uncertainty, cognitive growth, and resilience – elucidating why the experience of uncertainty is considered valuable for brain development and mental health. Next, we present empirical evidence and case studies that illustrate tangible changes in human behavior and neurocognitive patterns linked to interactions with AI and digital information technologies. These case studies range from the phenomenon of digital amnesia in everyday life to experimental research on AI in educational settings and its impact on memory, critical thinking, and creativity. Finally, we discuss the broader societal and developmental risks that may arise from an overreliance on AI-generated certainty, including the potential for increased cognitive rigidity, loss of creativity, and susceptibility to misinformation or biased thinking. Throughout, we underscore the importance of maintaining curiosity, critical inquiry, and uncertainty tolerance as safeguards of healthy cognition in an AI-driven world.
By integrating findings across cognitive psychology, neuroscience, and PNEI, this work seeks to provide a comprehensive understanding of how AI might inadvertently foster a form of intellectual complacency or rigidity – a phenomenon we term “premature cognitive closure.” The goal is not to argue that AI is inherently detrimental to the mind, but to illuminate the conditions under which its use can undermine cognitive flexibility and to highlight strategies for mitigating these risks. As AI tools become ever more embedded in daily life, such insight is crucial for educators, policymakers, parents, and users alike to ensure that the next generation retains the curiosity, humility, and resilience that come from engaging with uncertainty, rather than avoiding it.
Literature Review
AI’s Influence on Cognition: Offloading, Certainty, and Critical Thinking
The advent of AI technologies has revolutionized how humans obtain and process information, raising important questions about resulting cognitive trade-offs. On one hand, AI tools offer clear benefits: they can enhance learning and decision-making efficiency by providing rapid access to information and personalized feedback. For example, intelligent tutoring systems and adaptive learning platforms can tailor instruction to individual needs and deliver instant hints or corrections, which has been shown to support skill acquisition and knowledge retention under certain conditions. Such tools leverage AI’s strength in information retrieval and pattern recognition to augment human cognition. On the other hand, a growing body of evidence suggests that heavy reliance on AI for cognitive tasks leads to cognitive offloading, where individuals delegate mental processes to the technology and engage less in active thinking themselves. Cognitive offloading can reduce the user’s cognitive effort in the short term, but over time it may result in diminished practice of crucial thinking skills.
One key area of concern is critical thinking – the ability to analyze, evaluate, and synthesize information independently. Critical thinking is considered fundamental for academic success, problem-solving, and informed citizenship. AI’s influence on this skill is double-edged. While AI systems can support critical thinking by offering tools for simulating scenarios or visualizing complex data, they can also inadvertently erode critical thinking if users become overly dependent on AI outputs without reflection. Recent research by Gerlich (2025) provides empirical support for this erosion: in a mixed-method study with over 600 participants, frequent AI tool use was found to negatively correlate with critical thinking ability, largely due to increased cognitive offloading. Participants who habitually turned to AI for answers scored lower on tests of evaluating and reasoning, especially younger users who grew up with ubiquitous AI access. Qualitative interviews from the same study revealed a telling pattern – many users described a “lazy” reliance on AI recommendations for tasks ranging from simple fact-checking to complex decisions, admitting that they no longer felt the need to double-check or deeply understand information as long as the AI gave an answer. This complacency illustrates how AI can encourage a shallower processing style, where the presence of an authoritative answer reduces the incentive to critically engage with content.
The phenomenon underpinning these observations is often described as cognitive offloading. Cognitive offloading refers to using external aids (like devices or AI systems) to carry out mental tasks that one would otherwise perform internally. Classic examples include relying on a calculator for arithmetic or using GPS for navigation instead of one’s own spatial memory. AI extends this concept further by becoming an all-purpose oracle for knowledge and even reasoning. Research by Risko and Gilbert (2016) – though predating recent AI advances – laid the groundwork by showing that people naturally offload memory and problem-solving to readily available tools, sometimes at the cost of retaining skills and information. Sparrow et al. famously demonstrated that the mere availability of internet search influences how people remember facts: we tend to recall where to find answers (e.g. which keywords or websites) rather than the answers themselves. In modern terms, many now experience this as a form of digital amnesia, expecting that “the internet will remember” for them. AI chatbots and voice assistants, with their conversational interfaces and confident delivery, likely amplify this effect – why memorize or work through a problem when Alexa or ChatGPT can provide the solution on demand?
Overreliance on AI thus risks creating a generation of users who know how to find answers, but not how to derive or verify them. This dynamic can lead to premature cognitive closure, where individuals accept the first acceptable answer provided by an AI and terminate further inquiry. Psychologically, this ties into the need for cognitive closure, a trait describing one’s desire for a firm answer and aversion to ambiguity. AI’s promise of immediate certainty caters to high-NFCC individuals by satisfying their urge to resolve uncertainty quickly. However, indulging this urge habitually can stunt epistemic flexibility – the capacity to keep an open mind and entertain multiple possibilities. In educational psychology, it is well known that true critical thinking flourishes in an environment where students grapple with open-ended questions and conflicting viewpoints. If AI effectively shortcuts those cognitive struggles by spoon-feeding conclusions, students may miss out on learning to navigate uncertainty. A Frontiers in Psychology paper (Jose et al., 2025) terms this the cognitive paradox of AI in education: AI can enhance learning efficiency, yet it may simultaneously induce cognitive dependency and reduce the exercise of independent problem-solving. The authors argue that AI must be integrated carefully so that it supports rather than replaces the effortful cognitive processes that build skill.
Empirical studies in recent years underscore these theoretical concerns. For instance, a controlled experiment by Akgun and Toker (2024) investigated memory retention in undergraduates using AI tools for study assistance. Students who engaged in active learning (pre-testing themselves on material) before using the AI showed better long-term retention and understanding, whereas those who relied on the AI directly to provide answers experienced a decline in memory performance over time. The prolonged AI exposure appeared to weaken their ability (or motivation) to encode information deeply, consistent with the idea that the AI was doing the heavy lifting that the students would normally do themselves. Another study in Nigeria by Ododo et al. (2024) surveyed vocational students on AI usage and critical thinking. It found that a significant portion of students admitted to passively accepting information from AI without critical scrutiny, and this passive acceptance correlated with lower engagement in class and poorer critical-thinking outcomes. Notably, students who were more worried about AI (perhaps more aware of its limitations) were less likely to trust it uncritically, suggesting that a bit of skepticism actually preserved their independent thinking. These studies illustrate a pattern: excessive trust in AI-generated answers can lead to reduced cognitive effort, confirming the caution that AI might be making some of us “dumber” in terms of self-sufficiency in thinking.
Paradoxically, it’s not all negative – the effects of AI on cognition can be context-dependent. For example, when AI is used as a collaborative tool rather than an oracle, it may actually stimulate learning. A recent study by Essel et al. (2024) introduced ChatGPT as an interactive aid in a university research methods course, with surprising results. Students who used the AI during in-class activities (with guidance) showed improvements in critical thinking, reflective skepticism, and creative problem-solving compared to a control group without AI. The AI’s instant feedback and alternative perspectives, when deliberately integrated into pedagogy, encouraged students to question their reasoning and explore multiple approaches. Notably, these students reported higher curiosity and “inquisitiveness” – they were prompted by the AI’s responses to ask more questions and dig deeper. This suggests that how AI is used determines whether it hinders or helps cognition. Structured use that prompts further inquiry (e.g. using AI to generate hypotheses or test one’s understanding) can enhance cognitive skills. In contrast, unstructured use that simply gives answers to end-user questions can encourage mental shortcuts and superficial processing.
In summary, the influence of AI on human cognition is complex and multifaceted. The instant certainty provided by AI tools risks promoting cognitive offloading and premature closure, thereby undermining critical thinking and memory retention if users become complacent. Overreliance on AI can reduce the natural engagement of our analytical faculties, as evidenced by correlations between heavy AI use and lower critical thinking scores. However, when harnessed thoughtfully, AI can also act as a catalyst for learning by providing adaptive challenges and feedback that keep users actively involved. The key challenge identified in the literature is to avoid the trap of epistemic laziness – taking AI answers at face value – and instead to maintain a stance of critical engagement with AI. Doing so requires cultivating habits of questioning AI outputs, verifying information independently, and occasionally tolerating the discomfort of not having an immediate answer. These habits are closely tied to deeper cognitive and neurological processes, as the next sections will explore. We turn now to the neuropsychological perspective, to understand what happens in the brain when we experience uncertainty and curiosity, and why these states are so vital for cognitive growth.
Neuropsychological Insights: Uncertainty, Curiosity, and Brain Development
From a neuropsychological standpoint, uncertainty and curiosity are essential drivers of cognitive development. The human brain evolved in environments where information was incomplete and survival depended on exploration and learning. Consequently, our neural architecture is finely tuned to respond to novelty and knowledge gaps. When we encounter uncertainty (for example, a question we cannot answer immediately), it typically triggers a cascade of neural processes associated with curiosity – an intrinsic motivational state that pushes us to seek information. This state is far from a mere inconvenience; it is linked to activation of reward circuits in the brain and enhanced memory formation. Researchers have found that curiosity activates dopaminergic pathways in the midbrain (including the ventral tegmental area) and the nucleus accumbens, which are regions implicated in anticipating rewards. In practical terms, the brain treats acquiring new knowledge somewhat like a reward, releasing dopamine which in turn can facilitate better learning and memory consolidation in the hippocampus. This neurochemical dance is one reason why information learned out of genuine curiosity is often retained more effectively than information learned by coercion or rote.
Studies using functional MRI have demonstrated that when individuals are in a high-curiosity state (e.g., eager to learn the answer to a trivia question they find interesting), there is increased activity in brain regions associated with memory, like the hippocampus, and in reward regions, compared to low-curiosity states. Notably, one study showed that if you spark someone’s curiosity and then present them with unrelated information during that window of heightened curiosity, they tend to remember that incidental information better too. In other words, curiosity puts the brain in a ready-to-learn mode – it primes neural plasticity and information retention broadly. This underscores that the process of grappling with uncertainty (the itch of not knowing and wanting to know) is itself cognitively beneficial.
However, what happens if we short-circuit this process? AI’s promise of immediate answers means that the moment a “nagging question” arises, one might reflexively turn to a device for resolution. We now carry “digital reference libraries in our pockets,” and the temptation to Google any doubt or query is often irresistible. Psychologically, the tip-of-the-tongue feeling – that uncomfortable state of knowing you don’t know something – is a powerful trigger for curiosity and mental search. If, at the slightest hint of that feeling, we alleviate it by asking an AI, we may be depriving the brain of engaging in that motivated search state. Over time, this could have several neurocognitive consequences, as the following examples illustrate.
The case of GPS and spatial memory provides a striking parallel for AI’s potential effect on general cognition. A longitudinal study by Dahmani and Bohbot (2020) found that habitual GPS use was associated with worse spatial memory and a decline in hippocampal-dependent navigation abilities. Participants who heavily used GPS showed less activation of the brain’s spatial navigation circuits and more reliance on stimulus-response learning (repeatedly following the same routes without forming a mental map). Over time, those who used GPS more frequently had a steeper drop in their ability to navigate without GPS, suggesting a use-dependent atrophy of spatial cognitive skills. In neuropsychological terms, the hippocampus (key for flexible spatial learning and memory) was underutilized, while the caudate nucleus (involved in habit learning) took over, leading to more rigid, less flexible navigation behavior. This illustrates a broader principle: when an external aid consistently handles a cognitive function, the brain may “optimize” by devoting less resources to that function, which can make the person more dependent on the aid in a self-reinforcing cycle.
Applying that principle to AI and intellectual tasks, we can hypothesize similar outcomes. For instance, if a student habitually uses an AI assistant to generate ideas for essays or solutions to homework, they might engage less in the ideation processes typically handled by the brain’s default mode network (which is active in internal mentation and creativity) and prefrontal cortex (for organizing thoughts). Over time, the student might struggle to initiate idea generation without the AI prompt, having effectively outsourced part of their divergent thinking capacity. Indeed, a recent experiment on creativity found that students who used AI (ChatGPT-3) in a creative thinking course produced more ideas and somewhat higher fluency and flexibility scores, but they also exhibited signs of cognitive fixation and lower creative confidence when over-relying on AI suggestions. The AI provided examples which helped kick-start the process (hence more ideas), but those examples also narrowed students’ focus (they tended to stick close to AI’s suggestions) and made them doubt their own creativity after seeing the AI’s output. This highlights a kind of paradoxical effect on creativity: AI can expand production but might reduce originality and self-efficacy.
Another neuropsychological aspect to consider is the development of intellectual humility and cognitive flexibility. These traits involve recognizing the limits of one’s knowledge and being able to adapt one’s thinking when presented with new evidence or perspectives. Engaging with uncertainty, debate, and even error is crucial for developing these traits. When an AI gives a single, authoritative answer, especially one that sounds correct, users can develop an illusion of knowledge – thinking they understand something just because an answer was given, even if they have not validated or deeply processed it. This ties to the concept of metacognition (awareness and regulation of one’s own thinking). Relying on AI might impair the calibration of metacognitive judgments. A student might become overconfident in a piece of information because “I got it from a highly advanced AI, so it must be right,” potentially overlooking flaws or nuances. Over time, this externalization of validation (“the AI is always right”) can undermine the habit of double-checking and analyzing – key components of cognitive flexibility and humility.
Importantly, the neurodevelopmental impact of AI likely varies with age. Children and adolescents, whose prefrontal cortex and other brain regions are still developing, might be more malleable in their cognitive habits. If children grow up never experiencing the frustration and reward of solving things on their own because an AI is always there to help, they may develop a lower threshold for confusion and a higher expectation for instant clarity. From a brain development perspective, there is concern that this could lead to stunted development of persistence and attentional control. The ability to stay focused on a challenging task (sustained attention) and the ability to manage the discomfort of not immediately knowing an answer are partly learned skills, underpinned by neural circuits connecting the prefrontal cortex with limbic (emotional) regions. These circuits strengthen when a child practices delaying gratification and working through confusion. If AI reduces those practice opportunities, children might be more prone to anxiety or abandonment of tasks when they don’t have an immediate solution – essentially a lower frustration tolerance.
This dovetails with findings on the intolerance of uncertainty trait in youth. Intolerance of uncertainty (IU) is known to be a risk factor for anxiety disorders; individuals high in IU experience strong stress when faced with the unknown and often engage in reassurance-seeking behaviors to alleviate that stress. Constant AI use for answers can be seen as a form of reassurance-seeking – a way to immediately eliminate the unsettling feeling of not knowing. A recent study during the COVID-19 pandemic (a naturally uncertainty-rich environment) found that college students with high need for closure indeed experienced higher stress and anxiety in response to the uncertainty, compared to those more comfortable with ambiguity. Although that study did not specifically involve AI, it speaks to the general principle that those who have not developed coping mechanisms for uncertainty suffer more in uncertain conditions. If AI creates a sort of “uncertainty-free bubble” in day-to-day life, it might inadvertently raise vulnerability to stress when one inevitably encounters situations the AI can’t immediately resolve (of which life offers many, from personal relationships to novel problems).
In sum, neuropsychology strongly suggests that uncertainty and the process of resolving it (curiosity) play a vital role in strengthening cognitive functions. Curiosity activates reward and memory pathways, leading to deeper learning, while working through uncertainty builds problem-solving networks and mental flexibility. The habitual use of AI as an uncertainty-removal tool may short-circuit these processes. It risks producing brains that are less practiced in sustained inquiry, less stimulated by curiosity’s rewards, and potentially less structurally prepared for adaptive thinking. The hippocampus may not get its workout in forming new knowledge connections if an AI spoon-feeds facts; the prefrontal cortex may not fully flex its muscles in reasoning through problems if an AI gives the solution strategy. In the next section, we broaden the lens further with a PNEI perspective, considering how these cognitive patterns intersect with stress and resilience at the systemic level, including the endocrine and immune systems.
The PNEI Perspective: Cognitive Flexibility, Stress Resilience, and Health
Psychoneuroendocrinoimmunology (PNEI) examines the intricate interplay between psychological processes, the nervous system, hormonal (endocrine) responses, and the immune system. Within this framework, chronic patterns of thinking and responding to uncertainty can have physiological consequences. PNEI research has established that psychological stress can alter hormone levels (like cortisol and adrenaline) and modulate immune function, potentially affecting inflammation and disease resistance. One’s cognitive style – including how one handles uncertainty and novelty – contributes to what the body perceives as stressful or manageable. Here, we explore how a potential overreliance on AI-provided certainty might influence stress and resilience through the PNEI lens.
A central concept is cognitive flexibility, which refers to the mental ability to switch between thoughts or adapt behavior to new, unexpected situations. Cognitive flexibility is not only a cognitive asset; it is also linked to how one’s body copes with stress. Studies have found that individuals with greater cognitive (and especially affective) flexibility tend to show higher resilience in the face of stressors. They can reframe challenges, disengage from unproductive strategies, and remain psychologically steadier under adversity. Physiologically, this might manifest as a more moderated cortisol response to stress and quicker return to baseline – essentially, a nervous system that can adapt rather than getting stuck in fight-or-flight mode. Rademacher et al. (2023) demonstrated that performance on affective flexibility tasks (how easily one can shift attention away from negative emotional stimuli) correlated with resilience measures in healthy adults. Those who were better at shifting mindsets had better stress-coping abilities by multiple questionnaire measures, indicating a robust link between flexible thinking and the capacity to withstand stress.
Why is this relevant to AI and uncertainty? Because continually relying on AI for certain answers may cultivate the opposite of cognitive flexibility – namely cognitive rigidity. If a person becomes used to a single correct answer being immediately available, they may be less practiced in considering alternatives or adapting to the absence of a clear answer. This rigidity can increase psychological distress when confronting ambiguous or rapidly changing situations that have no ready-made solution. PNEI would predict that someone who rarely tolerates uncertainty could have a more exaggerated stress response when uncertainty inevitably occurs (for instance, when an AI cannot help, or in life domains like personal dilemmas where there is no single answer). Indeed, research on NFCC (need for closure) has shown that high NFCC individuals (who dislike uncertainty) exhibit higher physiological stress responses and anxiety in uncertain circumstances. During the early stages of the COVID-19 pandemic, people with high NFCC reported more anxiety and showed more stress indicators, likely because the pervasive uncertainty of the situation was especially intolerable to them. By potentially increasing intolerance to uncertainty, AI overuse could inadvertently be diminishing an individual’s stress immunity.
A useful analogy comes from immunology: consider how a lack of exposure to germs in childhood can lead to a weaker immune system (the hygiene hypothesis). Similarly, a lack of exposure to intellectual and emotional uncertainty – if AI “sterilizes” one’s environment of unknowns – might lead to a weaker ability to cope with the unknown, a sort of psychological immune system fragility. PNEI researchers talk about the concept of stress inoculation, where exposure to manageable stressors can build resilience over time. Facing uncertainties and learning one can handle them is akin to a mental vaccine. It trains both mind and body to regulate the stress response better. Conversely, avoiding uncertainty at every turn may prevent the development of these coping mechanisms, leaving one “immunologically naive” in a psychological sense.
Chronic reliance on AI could also potentially influence the balance of the autonomic nervous system (ANS). The ANS has two major arms: the sympathetic (fight-or-flight) and the parasympathetic (rest-and-digest). A resilient individual typically can swiftly activate fight-or-flight when needed, but also return to a calm state when the challenge passes. The vagus nerve, a key parasympathetic conduit, plays a role in calming organs after stress – its tone is higher in people who adapt well to stress. There is evidence that psychological traits like uncertainty tolerance correlate with better autonomic regulation, whereas those who catastrophize or demand certainty show signs of chronic sympathetic arousal or poor vagal tone (e.g., elevated heart rate, poor heart rate variability) during uncertainty. If AI encourages a habit of immediate certainty or control, an individual might find any situation where they lack control (where AI can’t fix it) to be disproportionately triggering, possibly keeping their physiology in a prolonged state of arousal in such moments.
The endocrine aspect is also noteworthy. Cortisol, the primary stress hormone, follows a diurnal rhythm but spikes under uncertainty and threat. People who worry or ruminate (cognitive patterns opposite of flexibility) often have elevated or dysregulated cortisol levels, which can impair immune function and even neural health if sustained (high cortisol can inhibit hippocampal neurogenesis, for example). PNEI studies have documented that cognitive interventions that increase one’s sense of control or adaptability can reduce stress hormone reactivity. By that logic, a lifestyle that reinforces a false sense of control (always getting an answer) might leave one’s cortisol regulation poorly calibrated. When reality deviates from that expectation of control, cortisol might overshoot, contributing to anxiety or panic. In contrast, individuals trained to accept and navigate uncertainty often show more moderate cortisol responses – they interpret uncertainty as a challenge rather than a threat.
Furthermore, chronic stress and anxiety have known effects on the immune system, such as reducing lymphocyte counts and impairing vaccine responses. While speculative, one could ask: if AI usage patterns lead to higher baseline anxiety (due to less uncertainty tolerance), could that subtly influence immune resilience? For example, a student who has never coped with problem-solving uncertainty might experience academic challenges as severe stress, potentially enough to disturb their sleep or raise their inflammation markers at exam time, more so than a student used to grappling with tough problems. Over years, small differences in how often one’s stress response is triggered (and for how long) can accumulate into meaningful differences in health outcomes, PNEI research suggests.
Another PNEI-relevant factor is social immunity and learning. Humans often buffer stress through social interactions – discussing uncertainties with friends, seeking group problem-solving, etc. If instead a person turns inward to an AI for answers, they might be bypassing opportunities for social support and co-regulation. While not inherently a physiological harm, there is evidence that perceived social support moderates stress responses (people feel and physiologically respond to stress better when not isolated). Overusing AI as an answer machine might foster a more individualistic coping style (just ask the app) rather than reaching out, which could have downstream effects on mood and stress over time, possibly even contributing to loneliness or reduced oxytocin release (the hormone linked to bonding and stress reduction).
On the flip side, it’s worth considering that for some individuals, AI might reduce stress in positive ways – for instance, reducing the cognitive load on someone with executive function difficulties or alleviating anxiety by providing quick reassurance about general questions (“What are the symptoms of X illness?”). PNEI effects are not universally bad; if AI relieves maladaptive stress (like constantly worrying about unknown information), that could be beneficial. The balance is key: a little uncertainty is growth-promoting, but too much uncertainty can be traumatic. AI might be a tool to cap excessive uncertainty (e.g., clarify a confusing concept so a student doesn’t despair) but it should not eliminate the productive uncertainty that fosters learning. This echoes the concept of the Zone of Proximal Development in education – learning is maximized when a learner is challenged just beyond their current ability, but with support. AI could provide that support to keep uncertainty in the optimal zone. The risk arises when AI overshoots, removing the challenge altogether rather than scaffolding the learner through it.
In conclusion, the PNEI perspective suggests that maintaining a healthy relationship with uncertainty is not only good for cognitive development but is also intertwined with stress regulation and overall resilience. Cognitive flexibility – which can be nurtured by engaging with uncertain situations – is associated with stronger stress response modulation and better mental health. Conversely, cognitive rigidity and high need for closure can exacerbate stress and anxiety. If AI tools inadvertently encourage the latter (by always providing closure), they could contribute to a population that is less resilient and more stress-prone when facing the unknown. This highlights a broader developmental risk: overreliance on AI might produce not only narrower minds but also more brittle ones. Society will need to pay attention to these psycho-biological dynamics as we integrate AI into everyday life. In the next section, we will look at concrete examples and evidence of how behavior and cognitive patterns are already changing in the presence of AI and related technologies, before turning to a discussion of societal implications.
Empirical Evidence and Case Studies
To ground the theoretical and interdisciplinary insights discussed above, this section presents several case studies and empirical findings that illustrate how AI and digital information technologies are impacting human cognition and behavior. Each case touches on a different cognitive domain or real-world context, providing concrete examples of the phenomena described in prior sections. These include the effects of instant information on memory (digital amnesia), the impact of navigation AI on spatial orientation (GPS and hippocampal changes), the influence of AI assistance on learning outcomes (critical thinking and retention in education), and the nuanced relationship between AI tools and creativity.
Case Study 1: Digital Amnesia – Memory Offloading in the Information Age
The term “digital amnesia” has been coined by researchers to describe the phenomenon whereby individuals are increasingly unable to recall information that they readily store or access on digital devices. In other words, people forget things they are confident can be retrieved online or from a device. The classic demonstration of this was Sparrow et al.’s work (2011) on the Google Effect, which showed that people who believed a fact would be accessible later via computer were less likely to remember it themselves. Although Sparrow’s study is more than a decade old, its findings have been reinforced by recent observations in the smartphone era.
A 2021 survey-based study (cited in Lodha, 2022) noted that a large proportion of adults could not recall phone numbers or appointment details that were stored in their phones, indicating an externalization of those memory tasks. In effect, the smartphone (with internet access) has become an external memory repository. More recently, a cross-sectional study of nursing students in 2025 provides empirical data on digital amnesia and its correlates. In that study, nearly 80% of students exhibited at least moderate levels of digital amnesia as measured by a “digital amnesia scale” – meaning they frequently rely on digital devices for recalling information and experience memory lapses for things not stored on their phone or computer. The study also found that students with higher smartphone dependency had significantly higher digital amnesia scores. This suggests a direct relationship: the more one depends on a device, the less one bothers to retain information in one’s own memory. The authors labeled overuse of technology as a “rising threat to human memory,” even referring to it as technology-induced atrophy of memory.
From a case perspective, consider a common scenario: A person wants to remember an interesting article or a to-do item, and rather than committing it to memory or jotting down key points, they bookmark the link or set a digital reminder. While this offloads the task effectively, later that person might find they have little recollection of the article’s content itself – just the act of saving it. Many readers likely resonate with this: the number of web pages we “save for later” or questions we quickly Google, only to promptly forget the details after getting the answer, is non-trivial. The cognitive cost here is subtle: by not engaging in effortful encoding (which strengthens memory), we form weaker memories. If needed again, we must rely on the external source once more, thus reinforcing a loop of dependency.
Beyond inconvenience, digital amnesia raises concerns about what happens if the technology is unavailable or if we encounter a situation where recall, not just recognition, is needed. For example, a young professional might frequently use an AI-powered knowledge base at work to recall protocol details. If suddenly asked to perform without that aid (say, during a power outage or a job interview scenario without devices), they may struggle far more than someone who had practiced recalling the information unaided. There is also a social dimension: human conversations often involve exchanging facts or anecdotes from memory. If everyone defaults to “let me check my phone” to retrieve even simple knowledge, the fluidity of dialogue and the demonstration of shared understanding could be impeded.
One might wonder: does it matter if memory is externalized, as long as we have efficient access? Some argue that freeing the brain from rote memorization allows more focus on higher-order thinking. This is valid to an extent – we don’t need or want to mentally store every trivial datum (e.g., dozens of phone numbers). However, there is a cognitive training effect that comes from memorization and recall; it exercises attention, and forms the building blocks of knowledge schemas. When everything is looked up as needed, people might have factoids but not a cohesive knowledge structure. Educators worry that students who constantly look up answers fail to develop a deeper understanding because they never hold information long enough to see connections. Memory and critical thinking are intertwined: knowing a base of facts helps in analyzing new information and spotting errors. If one’s internal knowledge base is sparse (since “the AI knows it, I don’t have to”), the ability to critically evaluate AI’s suggestions may be diminished, a point we return to later.
The BMC Nursing (2025) study also linked digital amnesia with physical symptoms and well-being. Intriguingly, it found that students with high digital amnesia and smartphone dependence reported more somatic symptoms (headaches, fatigue, etc.). The correlation doesn’t prove causation, but it opens questions. Perhaps overuse of devices (and resultant memory issues) is part of a lifestyle that includes poor sleep or eye strain (hence somatic complaints). Or it could be that the stress of over-reliance – e.g., anxiety when one’s phone is missing – manifests physically. Another possibility is that heavy device users engage less in activities that promote physical well-being (like exercise), but that strays from our focus.
In closing this case, digital amnesia is a clear illustration of cognitive offloading’s effect on memory. It demonstrates in everyday life what the “death of uncertainty” concept implies: if we constantly resolve uncertainty by quick information lookup, we cease internalizing knowledge. Memory is a “use it or lose it” faculty; without regular use, our ability to recall can atrophy or fail to develop fully. While external memory via AI can be extremely useful (we can store vastly more information externally than in our brains), the skill of memory – for things we need to think critically about or use creatively – remains important. This case underscores the importance of balancing technology use with activities that keep our biological memory engaged. Encouraging practices like summarizing what one learned without immediately relying on notes, or periodically doing tasks without digital aids, could be ways to mitigate digital amnesia. Otherwise, we risk cultivating minds that “know” far less than it might outwardly appear, because their knowledge resides largely in the cloud.
Case Study 2: Navigation by GPS – Externalizing Spatial Cognition and the Brain
Human spatial navigation is a domain that has been profoundly affected by AI and related technologies, making it an excellent case for examining how cognitive offloading can alter behavior and even brain structures. Global Positioning System (GPS) navigation apps, powered by algorithms (a form of narrow AI), have become a staple for drivers and travelers. These apps provide turn-by-turn directions to destinations, relieving the user of the need to consult maps or remember routes. The convenience is undeniable, but researchers have questioned what happens to our internal navigation skills when GPS is always available. The case of London taxi drivers is often cited: historically, to earn their license, drivers had to pass “The Knowledge,” memorizing London’s labyrinthine streets, a process that took years and was associated with an enlargement of their hippocampus (the brain’s memory and navigation center). Today, many drivers simply follow GPS instructions and may never develop that deep spatial knowledge.
A Scientific Reports (2020) study by Dahmani & Bohbot provides empirical evidence on this issue. The researchers assessed 50 drivers’ lifetime GPS usage and their performance on spatial memory tasks in a virtual navigation environment. They found a clear negative correlation between GPS reliance and spatial memory: those who frequently used GPS for navigation performed worse when later asked to navigate without it. Not only did they get lost more easily, but they also struggled with tasks like pointing to a destination from an unfamiliar location (which indicates poor cognitive mapping). This suggests that heavy GPS users might not be encoding spatial relationships effectively since they never needed to during guided travel.
Crucially, this study included a longitudinal component. Thirteen participants were retested three years after the initial assessment. The results were striking: those who had increased their GPS use the most over the three years showed the greatest decline in hippocampal-dependent spatial memory. In other words, greater GPS use meant a sharper decline in internal navigation ability over time. The authors note that this appears to be a causal relationship of sorts – it wasn’t simply that people with poor navigation skills used GPS more; even those with initially fine navigation ability who then switched to GPS saw their skills deteriorate. This addresses a potential confound: it was not that people who are bad with maps simply gravitate toward GPS; rather, the increase in GPS use itself predicted the decline.
Neuroscientifically, what might be happening is a down-regulation of the hippocampal system and up-regulation of the caudate nucleus system for navigation. The hippocampus supports a “cognitive map” strategy – understanding the spatial layout and one’s relation within it. The caudate supports a stimulus-response strategy – “at this landmark, take a left” habit learning. GPS essentially turns navigation into a series of stimulus-responses (turn when told). As a result, users may not engage hippocampal circuits to form a map; they instead form a habit of following instructions. Over time, the hippocampus (like an unused muscle) could weaken. Some studies even speculate that over-reliance on GPS could, in theory, increase one’s risk for cognitive decline conditions like Alzheimer’s, where the hippocampus is affected – this is speculative but follows the logic that unused neural circuits atrophy. Conversely, activities that challenge spatial memory (like orienteering or playing certain video games that require navigation) can strengthen the hippocampus. GPS use might be depriving some people of that mental exercise.
The implications of this case study go beyond navigation. It is a concrete example of how outsourcing a cognitive function to technology can lead to a measurable decrease in that function’s performance when the technology is absent. It parallels what might happen in other domains: people who always use a calculator find their mental arithmetic slowing, and people who always use translation apps may find it harder to learn or remember phrases in a new language. For AI and the “death of uncertainty” thesis, the GPS case is instructive because navigation used to be an uncertain endeavor – one had to plan routes, sometimes get lost and find one’s way. That uncertainty has been almost eliminated by GPS. But as a result, new drivers may never build a robust spatial knowledge base. In a very real sense, a part of human cognition (navigational sense) is being replaced by machine guidance, and the neural trade-offs are observable.
From a personal standpoint, many of us have anecdotes about being helpless without GPS in cities we have driven through multiple times or struggling to recall routes we’ve taken because we paid no attention while following the blue line on a screen. This points to the risk of cognitive laziness: when external aid is reliable, our brains optimize by not bothering to do the work. It’s an efficient system, but it has the side effect of fragility – if the aid is gone, so is our capability. In one amusing but telling study, researchers found that people using GPS were worse at estimating distances and even sometimes disoriented about cardinal directions, compared to those using paper maps, because the GPS users never had to actively orient themselves. They were effectively on “autopilot.”
The positive side of this case is that it’s reversible to a degree. If someone stops using GPS and starts navigating the old-fashioned way, they often can rebuild their spatial skills. The brain is plastic; hippocampal volume can increase with use even later in life (as seen in older adults who engage in spatial learning tasks for a few months). Thus, this case suggests a general principle for all cognitive domains: skills not often used can be revived by practice. If we find that AI has made us weaker in certain cognitive abilities, targeted training or simply periods of abstaining from the AI to practice manually could remediate some of the decline.
In the context of AI and certainty, the GPS story is a microcosm: A domain of uncertainty (finding one’s way) was eliminated by a technology, yielding convenience at the cost of internal skill and flexibility. There is also a safety argument: some experts worry that overdependence on GPS might be dangerous if the system fails – drivers might be literally lost, or unable to make decisions (like detouring when GPS maps are outdated due to new construction). Similarly, over-reliance on AI in critical thinking might be dangerous if AI information is wrong – will people catch errors, or follow the “turn-by-turn” instructions of AI into a metaphorical ditch? Cases have already been reported of individuals trusting GPS directions that led them astray (down unsafe roads, etc.). In an analogous way, individuals trusting AI outputs (like financial advice or medical information) without their own sense of direction could be led into poor decisions.
Overall, the GPS case study vividly demonstrates premature cognitive closure in action: the turn-by-turn instructions provide a closure (“go this way”) that prevents the individual from engaging with the broader uncertainties of navigating a complex environment. The result is a kind of cognitive narrowness and dependency. As we integrate more AI into daily tasks, this pattern could replicate elsewhere. It calls for strategies to keep our cognitive maps – literal and figurative – active. For example, some have suggested that map apps could have modes that allow a bit of “game” or challenge (e.g., only giving partial directions, prompting the user to figure out a segment) to keep the user’s spatial reasoning engaged. Similarly, perhaps educational AI could sometimes withhold the final answer and instead guide the student through thinking, ensuring they still exercise their mental faculties.
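To make the “partial directions” idea concrete, the following is a minimal Python sketch of what such a challenge mode might look like. It is offered purely as an illustrative assumption, not a feature of any existing navigation app; the Step type, the challenge_mode function, and the withhold_fraction parameter are hypothetical names introduced for this example.

```python
import random
from dataclasses import dataclass


@dataclass
class Step:
    instruction: str     # e.g., "turn left onto Elm St."
    distance_km: float   # distance until this maneuver


def challenge_mode(route: list[Step], withhold_fraction: float = 0.3,
                   seed: int | None = None) -> list[str]:
    """Generate spoken prompts for a route, withholding some turn instructions.

    Withheld steps are replaced with a prompt that asks the driver to choose
    the maneuver themselves, keeping part of the navigation problem open.
    """
    rng = random.Random(seed)
    prompts = []
    for i, step in enumerate(route):
        # Never withhold the first or last step, so the task stays safe and bounded.
        if 0 < i < len(route) - 1 and rng.random() < withhold_fraction:
            prompts.append(
                f"In {step.distance_km:.1f} km you will need to change direction. "
                "Choose the turn yourself; guidance resumes afterwards."
            )
        else:
            prompts.append(f"In {step.distance_km:.1f} km, {step.instruction}")
    return prompts


if __name__ == "__main__":
    route = [
        Step("continue on Main St.", 1.2),
        Step("turn left onto Elm St.", 0.4),
        Step("turn right onto Oak Ave.", 0.8),
        Step("arrive at your destination.", 0.3),
    ]
    for prompt in challenge_mode(route, withhold_fraction=0.5, seed=42):
        print(prompt)
```

The design intent is simply to keep a bounded portion of the route “open,” so the driver still performs some self-directed orientation while retaining overall guidance.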
Case Study 3: AI in Education – Impacts on Learning, Critical Thinking, and Retention
The integration of AI tools in educational settings is accelerating, presenting a real-world laboratory for observing how AI-generated certainty affects cognitive development. From intelligent tutoring systems to AI writing assistants, students now have access to technologies that can answer questions, provide hints, or even generate essays at a level of fluency that mimics human writing. This case study reviews emerging evidence on how such tools are influencing student behavior, critical thinking, and learning outcomes. It reveals a nuanced picture: AI can both enhance and undermine cognitive growth, depending largely on how it is used.
One area of focus has been the effect of AI (especially Large Language Model (LLM)-based chatbots like ChatGPT) on students’ study habits and skills. A pertinent example comes from Essel et al. (2024), who conducted an experiment in a university setting. In their study, one group of students (the experimental group) was encouraged to use ChatGPT as a support tool during research methods classes, while a control group learned without AI assistance. The results initially seem counterintuitive to the “AI harms thinking” narrative: the AI-using students showed greater improvements in critical thinking, creativity, and reflective thinking skills than the control group. They also exhibited traits like inquisitiveness, innovative search strategies, and even intellectual doubt (healthy skepticism) more than their peers. How can this be? The key lies in how the tool was incorporated. In this case, instructors guided students to use ChatGPT to generate multiple perspectives on a problem, to ask “what if” questions, and to critique the AI’s answers. Essentially, the AI was leveraged to stimulate deeper engagement – it acted as a catalyst for discussion and a mirror that students could examine their thinking against. For example, a student might ask ChatGPT to outline arguments on a topic, then critically compare those to the ones they had in mind, identifying weaknesses or biases in the AI’s version (or their own). In such a scenario, the AI’s instant answers did not shut down thinking; rather, they opened new avenues to explore. This demonstrates that AI’s effect is context-dependent: used passively, it might encourage intellectual laziness; used actively, it can prompt further curiosity.
However, not all evidence is so positive. Other studies and reports highlight instances where AI use correlates with decreased effort and skill development. A headline-grabbing example came from the aftermath of the COVID-19 remote learning phase, where some students used AI-based “homework helper” apps to complete assignments. Educators observed that while homework accuracy might have improved (because AI provided correct answers), test performance and independent problem-solving skills declined for those students, suggesting they weren’t internalizing the knowledge or methods. In one case, a math teacher reported that students who heavily relied on a step-solving app could no longer set up even simple algebra equations on their own; they had become reliant on the app to initiate every problem solution.
Empirical research aligns with these observations. Bai et al. (2023) examined how an AI tool (an LLM-based assistant) affected memory retention in a controlled study. Students who answered questions with the AI’s help had lower recall of the material a week later than those who answered questions through their own effort or after a pre-testing exercise. The AI group breezed through practice with high scores (since the AI helped them get answers right), but when the AI was not available later, they struggled to remember the content – a sign of illusion of competence fostered by the AI support. The group that faced the questions initially without AI (having to wrestle with them, even getting many wrong) ended up with better long-term retention, presumably because their initial struggle forced them to process the material more deeply (the well-known “testing effect” in learning). This study underscores a risk: AI tools can give students (and teachers) a false sense of mastery, as immediate performance may improve while true understanding lags.
Critical thinking and problem-solving also show mixed outcomes with AI usage. Ododo et al. (2024), referenced earlier, found that vocational education students who used AI frequently were at risk of becoming passive recipients of information. Many of these students admitted they would accept AI-provided answers without verifying sources or thinking critically about them, highlighting a form of mental complacency. The study noted gender differences (male students perceived their critical thinking was more threatened by AI than female students did, possibly due to differing usage patterns or confidence levels), but the overarching message was that if students treat AI answers as infallible, their critical engagement declines. As a remedy, the authors suggest integrating explicit instruction on source verification and encouraging independent thinking even when using AI. For instance, an assignment might require students to compare the AI’s answer with information from a textbook or to identify potential biases in the AI’s response.
Another angle to consider is assessment and academic integrity. The availability of AI that can generate essays and solve problems raises the temptation for students to bypass doing the work themselves. This is not just a moral or rule-based issue, but a cognitive one: if a student has an AI write their essay, they forgo the practice of organizing thoughts, developing arguments, and learning through writing – which are key cognitive developmental tasks in education. Some educators worry about a generation of students who might make it through school with polished assignments but underdeveloped writing and reasoning abilities because they leaned on AI. This is spurring a rethinking of assignments (more oral exams, in-class writing, and similar formats that ensure students exercise these skills) as well as the development of AI-detection tools. Interestingly, the cat-and-mouse game between AI writing and AI detection could further shape cognition – students might learn to write in ways that avoid detection (perhaps simplifying their prose), potentially affecting their writing style or development.
On the more optimistic side, AI can provide personalized learning that is cognitively beneficial. Adaptive learning systems use AI to find an optimal challenge level for students – not too hard (to avoid frustration) and not too easy (to avoid boredom). This could foster a state of “flow” in learning, which is ideal for skill growth. One could imagine AI tutors that not only provide answers but ask Socratic questions to lead the student to find the answer, preserving the uncertainty and cognitive effort necessary for learning. Preliminary evidence suggests adaptive AI tutors can improve mastery of subjects like math by tailoring practice to each student’s gaps. However, even proponents note the importance of keeping the student actively thinking. If the AI simply hands over the solution, the benefit evaporates.
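To make the “optimal challenge” idea concrete, the following is a minimal sketch of an adaptive-difficulty loop, assuming a hypothetical tutor that tracks a student’s recent answers and aims for a target success-rate band; the thresholds, window size, and step logic are illustrative assumptions, not drawn from any particular product.

```python
# A minimal, hypothetical sketch of the "optimal challenge" idea behind adaptive
# tutors: keep the learner's recent success rate near a target band so practice
# is neither trivially easy nor frustratingly hard. All constants are illustrative.
from collections import deque

TARGET_LOW, TARGET_HIGH = 0.6, 0.8   # desired recent success-rate band
WINDOW = 5                            # how many recent answers to consider

def next_difficulty(current: int, recent_results: deque, max_level: int = 10) -> int:
    """Raise difficulty when the student is coasting, lower it when struggling."""
    if len(recent_results) < WINDOW:
        return current                        # not enough data yet; hold steady
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > TARGET_HIGH:
        return min(current + 1, max_level)    # too easy -> add productive struggle
    if success_rate < TARGET_LOW:
        return max(current - 1, 1)            # too hard -> reduce frustration
    return current                            # in the "flow" band -> keep level

# Example: a student who answers correctly five times in a row gets harder items.
history = deque([1, 1, 1, 1, 1], maxlen=WINDOW)
print(next_difficulty(current=4, recent_results=history))  # -> 5
```

The same loop could just as easily be paired with Socratic hints that withhold the full solution at first, preserving productive uncertainty rather than eliminating it.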
A particularly telling case comes from an observation at a university: an instructor allowed students to use ChatGPT in an exam but instructed them that they would be graded not just on correct answers but on their critical evaluation of the AI’s answers. Students were forced to engage meta-cognitively, e.g., “ChatGPT says X, but I recognize a possible error or alternative interpretation…”. This exercise reportedly yielded deeper discussions than usual, as students could not just parrot an answer – they had to analyze it. In that scenario, AI became a tool to teach critical thinking about AI. It’s an interesting model: rather than banning AI to protect thinking, incorporate AI and demand students critique it, thereby exercising higher-order skills. Early pedagogical reports indicate students become more aware of issues like AI hallucinations or bias, and learn the valuable skill of verifying AI-generated content.
Finally, consider creativity and original thought in student work. Habib et al. (2024) studied a creative thinking class where students had access to AI idea generators. They found an increase in the number of ideas (fluency) and categories of ideas (flexibility) for those using AI. This suggests AI can serve as a brainstorming partner to expand creative output. However, they also documented downsides: cognitive fixation and reduced creative confidence. Cognitive fixation occurred when students latched onto AI-suggested ideas at the expense of generating their own novel concepts – essentially, the AI’s suggestions anchored their thinking. Reduced creative confidence was reported by some students who felt that the AI’s ideas or wording were “better” than what they would come up with, leading them to trust the AI over their own imagination. In an educational context, nurturing creativity partly involves pushing students to take intellectual risks and trust their voice. Over-reliance on AI’s polished outputs could discourage that in some learners. A balance might be to use AI for inspiration, but then intentionally set it aside and encourage students to diverge from its suggestions or iterate on them in directions of their own.
In summary, this educational case study reveals that AI’s integration into learning is a double-edged sword. It can either be a crutch that weakens the muscles of memory and critical thinking or a scaffold that helps students reach higher levels of understanding and skill. The decisive factor is how students and educators engage with the uncertainty and answers AI provides. If AI is allowed to become an answer vending machine – delivering premature closure to academic problems – students may experience superficial learning, poor retention, and dwindling problem-solving abilities. If instead AI is used as a sparring partner – something to question, verify, and learn from through contrast – it can enrich the learning experience by introducing multiple perspectives and immediate feedback. The onus is on educational strategies to shape AI use toward the latter. This case reinforces that premature cognitive closure is not an inevitability of AI use; it is a pitfall we must consciously avoid through thoughtful pedagogy and habituating students to remain curious and skeptical, even in the face of AI’s instant answers.
Case Study 4: Dogmatic Machines? AI, Bias Reinforcement, and Epistemic Rigidity
A final case study concerns the risk that people may become dogmatically shaped by machine-provided answers, adopting AI outputs as gospel and losing the willingness or ability to question underlying assumptions. While the previous case looked at individual cognitive skills, this one touches on broader societal cognition and belief formation. It addresses scenarios where AI systems, due to their training data or algorithms, might present information in a biased or one-sided manner, and users uncritically absorb those views – a recipe for reinforcing dogmatism and polarization.
Modern AI language models are trained on vast swaths of internet text, which inevitably contain human biases, popular opinions, and dominant cultural narratives. If a user asks an AI a complex question – say, about a contentious political or social issue – the answer they get might sound confident and factual but could reflect particular biases (e.g., Western-centric viewpoints, or the biases of the prevalent sources in the data). If users accept these confident answers without skepticism, they risk having their own viewpoints channeled by the AI’s perspective. Automation bias is a well-documented phenomenon wherein people trust an automated system too much, even in the face of evidence it might be wrong. Applied to AI advice or information, automation bias might lead someone to agree with an AI-generated statement simply because “the computer said so.”
One emerging piece of evidence in this realm comes from studies on AI and misinformation. Williamson and Prybutok (2024) discuss how AI systems can exploit cognitive vulnerabilities in users, including our tendency to accept information that is fluently delivered and aligns with our preconceptions. In experiments where AI was used to persuade or nudge decisions, researchers found about a 70% success rate in steering people toward specific choices by phrasing suggestions in certain ways. Users often were not aware they were being manipulated; they interpreted the AI’s outputs as neutral recommendations. This raises the concern that AI could reinforce existing beliefs or subtly shape new ones while giving the illusion of objectivity. For example, if an AI search algorithm tends to show news from one political leaning more often (even inadvertently, based on engagement metrics), a user may gradually become more entrenched in that worldview, trusting that “this is the information I keep seeing, so it must be true or widely accepted.”
A case in point was observed with YouTube’s AI-driven recommendation system (though not an LLM, it’s an AI selecting content). A few years back, it was reported that the recommendation algorithm often led users to increasingly extreme content on certain topics because that content maximized watch time. People who started with a relatively neutral video could end up in a rabbit hole of more radical videos due to the AI’s optimized suggestions. This “radicalization via algorithm” case shows how AI can contribute to dogmatic or extremist thinking by continuously validating and amplifying a certain line of thought, removing exposure to contrary viewpoints. If a person doesn’t actively seek uncertainty or counterevidence – which is less likely if they trust the AI’s choices – they enter an echo chamber curated by AI.
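As a hedged illustration of this dynamic (not a description of YouTube’s actual system), the toy sketch below contrasts ranking purely by predicted engagement with ranking that penalizes topics the user has already seen repeatedly; all item names and scores are invented.

```python
# A toy, hypothetical illustration of the ranking dynamic described above: scoring
# items purely on predicted watch time keeps surfacing whatever the user already
# engages with, while a simple diversity penalty re-exposes them to other content.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str
    predicted_watch_minutes: float  # proxy for "engagement"

catalog = [
    Video("Mildly partisan take", "politics_A", 9.0),
    Video("Strongly partisan take", "politics_A", 14.0),
    Video("Opposing viewpoint", "politics_B", 6.0),
    Video("Unrelated documentary", "science", 7.5),
]

def rank_by_engagement(videos):
    # Pure engagement maximization: the most captivating item always wins.
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

def rank_with_diversity(videos, recent_topics, penalty=5.0):
    # Penalize topics the user has already seen a lot, trading some predicted
    # engagement for exposure to alternative content.
    def score(v):
        return v.predicted_watch_minutes - penalty * recent_topics.count(v.topic)
    return sorted(videos, key=score, reverse=True)

print(rank_by_engagement(catalog)[0].title)                          # "Strongly partisan take"
print(rank_with_diversity(catalog, ["politics_A"] * 3)[0].title)     # a non-politics_A item
```

The penalty term is only meant to show that deliberately re-introducing heterogeneous content is a design choice available to platform builders; the echo chamber is curated by an objective function, not an inevitability.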
Large language models haven’t been around long enough for extensive longitudinal studies on belief formation, but anecdotal evidence and short-term studies suggest a cautious approach. One worry is that as these models become more integrated (e.g., an AI personal assistant that answers all sorts of life questions), users might start deferring to them on moral or ethical questions. If an AI gives a particular moral stance confidently, does the user adopt that stance? Early user studies show that many people do attribute authority to AI responses, sometimes even in areas that are subjective or value-laden. For instance, when asked to resolve a moral dilemma, some users considered the AI’s answer as akin to a knowledgeable advisor’s, rather than just one input. This indicates a potential for outsourcing moral or critical judgment, which is quite concerning from a cognitive autonomy perspective.
Another aspect of dogmatism is the loss of intellectual humility. Intellectual humility involves recognizing the limits of one’s knowledge and being open to new information. Instant AI answers, especially when consistently accurate for factual queries, might inflate users’ confidence in their “knowledge.” People might start to think they know about a topic after reading a single AI summary, without realizing the nuances or uncertainties that exist. In academic contexts, there is a term, “the Wikipedia syndrome,” for when students read a summary and believe they have mastered a topic. AI could amplify that effect because it can produce even more comprehensive-sounding summaries. The danger is a shallow understanding combined with high confidence, a classic recipe for dogmatism (since one feels certain, but that certainty is not grounded in deep comprehension).
There is also the risk of “one-size-fits-all” answers. Traditional learning encourages comparing multiple sources and perspectives. AI might provide a synthesized answer that averages those perspectives into a single narrative, which can mask the fact that alternate interpretations exist. If users stop seeking multiple sources because the AI seems to provide a complete answer, minority viewpoints or less popular theories might get ignored, and a sort of monolithic viewpoint could emerge for many. This has societal implications: public opinion could become more homogenized around answers AI tends to give, or split if AI output differs across platforms or settings (leading different groups to trust different AI as their authority, each dogmatically clinging to their AI-backed “truth”).
The flip side case to consider is where AI is outright wrong (the phenomenon of AI hallucinations) and users trust it, leading to strongly held false beliefs. There have been cases of AI providing incorrect legal or medical advice. If a user unquestioningly follows that, they might later insist that the incorrect information is true – “because I got this answer from a sophisticated AI.” In one humorous but telling incident, an AI declared that a particular fake historical event was real, and some users repeated the claim as fact on forums, citing the AI as the source. This shows a readiness in some to transfer trust from traditional human experts to AI, sometimes even when evidence contradicts the AI.
To counter this, some have proposed AI transparency and education. If users understand that AI can be fallible and see references or probabilities, they might be less prone to blind acceptance. In practice, OpenAI, Google, and others are working on making AI more transparent about uncertainty (e.g., saying “I’m not sure, but here’s a guess” rather than always giving a firm answer). There’s also the concept of an “AI literacy” movement – teaching people how AI works, its limitations, and how to critically evaluate its output. This could be crucial to prevent a generation of dogmatic AI followers.
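One concrete way an application could operationalize this kind of transparency is sketched below: sample the model several times and use agreement across samples as a rough proxy for confidence, hedging the reply when agreement is low. The `ask_model` function is a hypothetical stand-in for whatever model call a real system would make, the mock answers merely simulate occasional disagreement, and the thresholds are illustrative; sample agreement is a heuristic, not calibrated certainty.

```python
# A minimal sketch of surfacing uncertainty instead of always returning one
# confident answer: sample the model several times and treat agreement across
# samples as a rough confidence proxy. ask_model is a hypothetical stand-in.
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Stand-in for a real model call; simulates a model that occasionally
    # disagrees with itself on the same question.
    return random.choice(["Answer A", "Answer A", "Answer A", "Answer B"])

def answer_with_uncertainty(question: str, samples: int = 5, threshold: float = 0.8) -> str:
    answers = [ask_model(question) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / samples
    if agreement >= threshold:
        return f"{best} (consistent across {count}/{samples} samples)"
    # Low agreement: hedge the reply and invite verification rather than
    # delivering premature closure.
    return (f"I'm not sure. My most common answer was: {best} "
            f"({count}/{samples} samples agreed). Please verify with another source.")

print(answer_with_uncertainty("An illustrative question"))
```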
One more angle to consider is group dynamics in the presence of AI. If, in a meeting, an AI tool gives an analysis, do team members hesitate to dissent, perhaps assuming the AI has processed more data than they could? Some corporate trial runs of AI decision-support have noted that junior staff might defer to an AI’s suggestion even when they have a gut feeling it’s not right, leading to group decisions that are suboptimal. This parallels how people often defer to a confident human leader’s opinion (groupthink), except here the “leader” is a machine. Such deference can ossify group thinking – once the AI’s answer is presented, discussion might shut down, leading to premature consensus on a course of action without thoroughly exploring alternatives.
In conclusion, this case study illustrates the risk that AI-provided answers, especially when delivered with confidence and finality, can foster epistemic rigidity – a tendency to stick with an answer or viewpoint without openness to question or revise it. If individuals and societies begin to anchor on AI outputs as definitive, we may see a decline in the richness of debate, a reduced habit of cross-verifying facts, and an increase in the polarization effect if different AI systems present slightly different slants (with users clinging to “their AI’s” position). It’s a scenario where uncertainty is not just dying; it’s dead and buried by an illusion of knowledge. Avoiding this fate will require conscious effort: designing AI to encourage questions rather than suppress them, and cultivating in users an attitude that even the best AI can be wrong or incomplete – thus preserving a healthy dose of uncertainty and skepticism in our cognitive lives.
Discussion
The case studies and evidence presented above paint a comprehensive yet complex picture of AI’s impact on human cognition. In synthesizing these findings, several key themes emerge. First, the effect of AI on cognitive processes is highly context-dependent. AI tools have the capacity to both augment and undermine cognitive skills. The outcome largely hinges on user engagement: active, critical use of AI tends to yield cognitive benefits, whereas passive, uncritical reliance tends to produce cognitive costs. This dual potential can be viewed through the lens of cognitive science as a classic trade-off between cognitive labor and cognitive ease. AI offers unprecedented ease – answers without effort – but it is the effortful struggle with uncertainty that often yields growth in understanding, memory, and problem-solving ability.
One recurring theme is the offloading of cognitive work onto AI and its consequences. Offloading is not inherently negative – humans have been doing it for ages with written language (external memory), calculators (arithmetic), and more. What is novel with AI is the breadth of tasks now offloadable and the natural language interface that makes the offloading nearly frictionless. We saw that with memory (digital amnesia) and navigation (GPS dependence), offloading can lead to disuse atrophy of skills and knowledge bases. In educational contexts, offloading the formulation of ideas or answers to AI was shown to weaken retention and problem-solving capabilities unless carefully managed. Thus, a crucial question arises: how do we strike the right balance between using AI as a helpful tool and not letting it become a crutch that impairs our cognitive fitness? This is analogous to balancing the use of assistive devices in physical domains – e.g., we use cars to move faster but still recognize the value of exercise for our bodies. Similarly, we might harness AI for efficiency but still need “cognitive exercise” to keep our minds sharp. Structured constraints, such as deliberately solving certain problems without AI or using AI in ways that require verification and analysis, may serve as cognitive exercise regimes.
Another major theme is the role of uncertainty in driving curiosity, learning, and resilience. Uncertainty is often uncomfortable, but as the evidence suggests, it plays a pivotal role in motivating exploration and activating reward pathways that reinforce learning. When AI continuously resolves uncertainty immediately, it effectively short-circuits this process. The discussion highlighted that this could lead to a decrease in curiosity over time – why wonder about anything when answers are a quick query away? There is a risk of nurturing a habit of intellectual impatience, where one is disinclined to mull over unknowns or engage in open-ended inquiry. From a neuropsychological perspective, constantly living in a state of immediate certainty might reduce activation of dopaminergic curiosity circuits, potentially making learning less rewarding and more of a passive intake of information. This could especially impact younger individuals; their baseline for how effortful or gratifying learning should be might shift toward needing instant answers to feel satisfied, potentially diminishing perseverance in research or problem-solving tasks that don’t have instant solutions.
The PNEI angle added that coping with uncertainty and mild stressors fortifies our mental and physical resilience. The discussion thus suggests that an over-sanitized, certainty-filled mental environment (facilitated by AI) could ironically leave individuals more fragile in the face of unexpected challenges. To borrow a term from Nassim Nicholas Taleb, systems (including the human mind) often exhibit “antifragility,” getting stronger under a degree of variability and stress. If AI removes too much variability, we might be depriving ourselves of antifragility. This points to a somewhat paradoxical recommendation: embrace a bit of uncertainty deliberately. For example, educators might intentionally pose questions that AI cannot directly answer, forcing students to grapple and debate – in essence, injecting uncertainty back into an AI-rich learning environment to ensure students build resilience and cognitive flexibility.
A critical part of the discussion deals with epistemic humility and critical thinking in the face of AI. The concern is that AI’s authoritative tone and breadth of knowledge can imbue users with a false sense of security in the information provided. Our case on dogmatism and bias reinforcement demonstrated the perils of taking AI answers at face value, potentially amplifying biases or creating new echo chambers. This raises the societal need for improving “AI literacy” – much as we emphasize media literacy to help people critically evaluate news sources, we now must teach how to critically evaluate AI outputs. This includes understanding that AI can be wrong, biased, or incomplete, and encouraging cross-checking with primary sources or alternative perspectives. Concrete steps follow from this: for instance, AI systems themselves could be designed to present answers with confidence levels or highlight contentious points (some efforts in this direction are underway, like Google’s AI snapshots that also list source articles). Another idea is developing personal or community norms such as: “When I get an answer from AI, I will seek a second opinion – whether another AI, a human expert, or a trusted database – especially on important matters.” Normalizing a bit of doubt can counteract the drift to dogmatism.
The educational implications discussed suggest a transformation in pedagogy. Traditional education often valued rote learning, which is now less relevant when facts are readily accessible. The focus needs to shift even more toward skills like critical analysis, creativity, and learning how to learn – areas where AI can be either a foe or ally. For example, if writing an essay becomes more about prompting an AI and then editing, educators might grade students on the quality of their prompts and their ability to improve AI-generated content. This still engages the student’s critical thinking and understanding, rather than the final prose alone. It’s a reimagining of tasks to ensure the human is not out of the loop, but rather working with AI in a meaningful way. Likewise, in professional settings, training programs might include scenarios of “AI gave a recommendation – now troubleshoot and validate it” as a core competency.
Societal risks were also a key part of our exploration. Beyond individual cognitive skills, widespread AI use could influence collective human behavior and values. For instance, if AI-driven automation reduces the need for human problem-solvers in certain domains, fewer people may train in those domains, which could lead to a knowledge de-skilling at the societal level (e.g., fewer people know how to do mental math or navigate without GPS, as seen). On the extreme end, if crucial knowledge becomes heavily concentrated in AI systems that few understand, society might become vulnerable – consider scenarios where people can’t verify or challenge AI decisions because they lack the requisite knowledge and there aren’t enough independent experts. This touches on an ethical dimension: maintaining human oversight and competence as a fail-safe.
It should also be acknowledged that not all uncertainty is desirable to preserve. Part of progress is removing certain uncertainties (we don’t want uncertainty about whether our bridge will hold or our medicine works). AI can help reduce harmful uncertainty (like diagnosing diseases faster). The target of concern is excessive cognitive ease in day-to-day intellectual life that hampers growth. It’s a call for a balanced diet of mental challenges – just as we wouldn’t recommend a lifestyle of never moving because machines can move for us, we shouldn’t accept a lifestyle of never thinking deeply because machines can “think” for us.
An interesting nuance is the differentiation between information and understanding. AI provides information; understanding is constructed in the human mind, often through effort and synthesis. This echoes an educational adage: “telling is not teaching, and listening is not learning.” AI tells; if humans just listen (or read), it doesn’t guarantee learning. True understanding often requires wrestling with the material oneself – summarizing it, questioning it, applying it. The discussion reinforces that while AI can supply endless information (and even coherent explanations), we must ensure that individuals process that information actively to transform it into personal understanding. If AI becomes a shortcut that skips that personal processing, the user may have an illusion of knowledge that collapses under pressure (like the student who can’t solve problems without the app).
From a policy and design standpoint, several mitigation strategies follow from the risks identified here: (1) design AI systems to express uncertainty, cite sources, and flag contentious points rather than always delivering a single confident answer; (2) promote AI literacy so that users understand how these systems work, where they fail, and how to verify their output against independent sources; (3) redesign educational tasks so that students must critique, verify, and build on AI output rather than submit it wholesale; (4) preserve deliberate “AI-free” practice – periods of problem-solving, writing, and navigation without assistance – as a form of cognitive exercise; and (5) maintain human oversight and independent expertise in critical domains so that AI-supported decisions can always be checked and challenged.
None of this is to deny that AI is here to stay and offers immense benefits. The goal is not to reject AI but to use it wisely. The recurring message is the importance of maintaining epistemic vigilance and flexibility. As long as humans remain curious, question-asking beings in partnership with AI rather than passive consumers of AI outputs, the negative effects can be mitigated. In fact, AI might then truly augment human intellect by handling tedious parts and freeing humans for more creative and complex endeavors – the original optimistic vision of “intelligence amplification.” Achieving that outcome requires conscious effort now to avoid the path of least resistance that leads to cognitive atrophy.
Finally, this discussion leads naturally into a call for more research. Many of our claims, especially on long-term neurocognitive effects, are based on theoretical extrapolation or early evidence. Longitudinal studies will be needed to see how, for instance, a child who grows up with AI search and homework help differs in cognitive development from those who didn’t – akin to past studies on how calculators in school affected math learning (where the initially feared effects turned out to be manageable with adaptation). We should also research intervention efficacy: what strategies effectively maintain curiosity and critical thinking in an AI-rich environment? Are there differences across populations – perhaps individuals with certain cognitive styles are more prone to the lulling effects of AI, whereas others naturally stay skeptical?
In conclusion, the discussion underscores a proactive stance: recognizing the “risk of premature cognitive closure” as AI becomes integrated into all aspects of life, and taking steps to ensure that uncertainty – in its constructive, curiosity-sparking form – does not become an endangered element of human experience. Uncertainty, in moderation, is not a void to be filled as quickly as possible, but a space of potential – the starting point of questions, explorations, and discoveries that drive individual and societal progress. AI should be a tool to navigate that space, not to seal it off.
Conclusion
Artificial Intelligence is transforming the landscape of human cognition, offering tools that provide instant answers and seemingly limitless information. This paper has explored the proposition that with these advances comes a peril: the death of uncertainty in daily cognitive life, and with it the risk of premature cognitive closure. Synthesizing insights from neuropsychology, PNEI, and emerging empirical evidence, we find that the relationship between AI and human cognition is complex. AI’s instant certainty can indeed hinder cognitive development and flexibility – but it need not do so inevitably. The outcome depends on how individuals, educators, and societies choose to integrate AI into cognitive routines.
We have seen that uncertainty is a fundamental engine of cognitive growth. It sparks curiosity, drives the quest for knowledge, and forces the brain to build adaptive strategies. When AI removes too much uncertainty, offering ready-made conclusions, it can short-circuit these vital processes. The risks include reduced memory retention (outsourcing recall to digital devices), attenuated critical thinking (accepting answers without analysis), diminished creativity (relying on AI suggestions and losing confidence in one’s own ideas), and lowered tolerance for ambiguity (becoming anxious or helpless without a clear answer). In neuropsychological terms, chronic reliance on AI can promote cognitive rigidity and shallow processing – traits counterproductive to learning and problem-solving. In PNEI terms, it may deprive individuals of the minor stresses that inoculate against larger stress, potentially weakening resilience.
Our case studies vividly illustrate these phenomena. We discussed digital amnesia, where constant access to external memory (search engines, smartphones) leads people to forget information they assume they can just look up – a modern cognitive atrophy of sorts. The GPS navigation example showed how offloading spatial tasks to AI can degrade internal map-building skills and even affect the brain’s navigation circuitry. In education, we saw contrasting scenarios: AI used as a replacement for thinking correlated with poorer learning outcomes, whereas AI used as a catalyst for deeper inquiry could enhance critical and creative thinking. This duality underscores that AI itself is not a malevolent force eroding cognition; rather, it is a powerful amplifier that can either augment or undercut human intellect depending on usage. Finally, we examined the risk of dogmatism via AI, noting that uncritical acceptance of AI outputs can reinforce biases and reduce intellectual humility – a societal-level cognitive risk in the age of algorithmic information streams.
What then are we to do in light of these findings? The answer is not to reject AI – that genie is well out of the bottle, and rightly so given AI’s immense benefits across fields. Instead, the challenge is to adapt our cognitive practices and educational systems to ensure that uncertainty, curiosity, and critical thinking remain central in the AI era. Several recommendations emerge: use AI as a sparring partner to be questioned and verified rather than as an answer vending machine; teach AI literacy and source verification explicitly, from early schooling onward; design AI systems that communicate uncertainty and invite follow-up questions instead of delivering only final answers; preserve regular AI-free intellectual practice – writing, problem-solving, navigating – as deliberate cognitive exercise; and support longitudinal research on how sustained AI use shapes cognitive development, so that interventions can be refined over time.
In closing, the risk of premature cognitive closure in the age of AI is real, but it is not a foregone conclusion. It is a risk that comes with the incredible power of AI to provide certainty and information. Like all powerful tools, the key lies in how we wield it. Historically, humanity has faced similar concerns with the printing press (“will we still remember stories?”), calculators (“will we lose number sense?”), and the internet (“will we stop going to libraries or knowing facts?”). In each case, adaptation and conscious effort preserved core human cognitive capacities, even as we offloaded some tasks to new technology. There’s evidence we can adapt again: we might find that new cognitive skills emerge (for example, prompt engineering and the ability to use AI effectively could become valued aptitudes, just as searching the internet and discerning quality sources did).
The overarching conclusion is that curiosity, critical thinking, and flexibility must be actively safeguarded and cultivated in tandem with AI’s integration. Uncertainty should be reframed not as an inconvenience to eliminate, but as a space for imagination and growth. If we can achieve this balance, AI and human cognition can enter a symbiotic relationship: AI handling routine or computational tasks to free human minds for creative, analytical, and interpersonal thinking – areas where uncertainty and nuance are rich and where human cognition truly shines. In such a future, rather than heralding the death of uncertainty and the stagnation of thought, AI could become a catalyst for an intellectual renaissance – but steering toward that outcome will require wisdom, humility, and yes, a healthy appreciation for the productive role of uncertainty in our lives.
References
Agus, M. L., Terzi, S., Soro, A., Fraschini, M., & Ghilardi, M. F. (2023). Navigating with GPS affects hippocampal activity and spatial memory: A longitudinal study of habitual GPS use. Scientific Reports, 13(1), 1402.
Ali, L. A. E. H., Ali, M. I. E. B., Ali, S. M. A., & Nashwan, A. J. (2025). Smartphone dependency, digital amnesia, and somatic symptoms among nursing students: The challenge of artificial intelligence. BMC Nursing, 24(1), 599.
Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10(1), 6310.
Essel, H. B., Vlachopoulos, D., Essuman, A. B., & Amankwa, J. O. (2024). ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Computers and Education: Artificial Intelligence, 6, 100198.
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6.
Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M., Mumthas, S., & Joseph, S. (2025). The cognitive paradox of AI in education: between enhancement and erosion. Frontiers in Psychology, 16, Article 1550621.
Rademacher, L., Kraft, D., Eckart, C., & Fiebach, C. J. (2023). Individual differences in resilience to stress are associated with affective flexibility. Psychological Research, 87(6), 1862-1879.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776-778.
Williamson, S. M., & Prybutok, V. (2024). The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation. Information, 15(6), 299.
Xu, Y. (2025, April 15). AI’s Impact on Children’s Social and Cognitive Development [Audio podcast interview]. Children and Screens Institute. (Interview with Dr. Ying Xu discussing how AI tools may affect children’s critical thinking and learning habits.)
About
"Dr. Del Valle is an International Business Transformation Executive with broad experience in advisory practice building & client delivery, C-Level GTM activation campaigns, intelligent industry analytics services, and change & value levers assessments. He led the data integration for one of the largest touchless planning & fulfillment implementations in the world for a $346B health-care company. He holds a PhD in Law, a DBA, an MBA, and further postgraduate studies in Research, Data Science, Robotics, and Consumer Neuroscience." Follow him on LinkedIn: https://lnkd.in/gWCw-39g