When Research Gets Weaponised: Why That "Definitive" Anti-AI Study Doesn't Say What You Think It Does

I've been 'diving deep' (see what I did there!) into a research paper that's been making waves across LinkedIn and education circles: "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking." This 28-page study has been wielded like a weapon by AI critics, with posts claiming it provides "definitive proof that AI causes cognitive degradation" and that we "shouldn't be using AI in classrooms."

But here's the problem: the people sharing these definitive statements seem to have missed what the study actually says—and more importantly, what it doesn't say.

The Academic Irony: When Scholars Abandon Scholarship

What's particularly troubling is watching academics—people who should know better—engage in thoroughly non-academic practices when sharing this research. I've seen respected figures with significant followings reduce nuanced findings to inflammatory soundbites, completely abandoning the rigorous analysis they'd demand from their own students.

The worst example? A lovely little graphic I encountered showing two anthropomorphised brains in a gym—the brain without AI assistance positioned as buff and strong, while the AI-assisted brain appeared weak and flabby. This visual shorthand is pure rhetorical fallacy, designed for clicks rather than meaningful discussion.

This kind of dumbed-down messaging is poisoning our professional discourse. Yes, it's great for engagement metrics, but it's terrible for the thoughtful analysis our profession desperately needs.

The Study: Actually Good Research with Important Limitations

Let me be clear from the outset: this is solid research within its scope. The study examined 666 UK participants using mixed methods, employed established assessment tools, and followed proper ethical procedures. The researchers found a strong negative correlation (r = -0.68) between self-reported AI usage and self-reported critical thinking abilities.

That's interesting. It's also far from the smoking gun that critics are claiming it to be.
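
Before going further, it is worth being concrete about what that headline number is. Below is a minimal sketch, using entirely invented Likert-style responses (the variable names, sample values, and effect size are my assumptions, not the study's data or code), of how a Pearson correlation between two self-report measures is computed:

```python
# Minimal sketch: a Pearson correlation between two invented self-report
# scales. Nothing here comes from the study itself.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 666  # same sample size as the study, but the responses are simulated

# Hypothetical 1-5 style self-ratings (assumed, for illustration only)
ai_usage = rng.normal(3.0, 1.0, n)
critical_thinking = 4.0 - 0.7 * ai_usage + rng.normal(0, 0.8, n)

r, p = pearsonr(ai_usage, critical_thinking)
print(f"r = {r:.2f}, p = {p:.3g}")
# A strong negative r only tells us the two self-reports move together;
# it says nothing about which, if either, drives the other.
```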

What the Study Actually Shows vs. Media Claims

The Reality

  • People who report using AI more also report lower confidence in their critical thinking

  • Statistical relationships exist in self-reported perceptions

  • Qualitative themes align with quantitative patterns

  • Cognitive offloading appears to mediate the relationship statistically

What It Cannot Support

  • Definitive causation: That AI use directly causes cognitive decline

  • Objective decline: Actual deterioration in critical thinking performance

  • Generalised effects: Findings beyond UK self-reporting populations

  • Mechanism validation: That cognitive offloading actually occurs as theorised (the sketch below shows what the statistical mediation test does, and does not, establish)
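
To make the mediation point concrete, here is a minimal, regression-based sketch in the Baron-Kenny style, run on invented data. The variable names and simulated effects are assumptions for illustration; the point is that a statistical mediation check only compares regression coefficients, it cannot confirm that offloading is the real mechanism:

```python
# Minimal mediation sketch on invented data: does the AI-usage coefficient
# shrink once "offloading" is added to the model?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 666
ai_usage = rng.normal(0, 1, n)
offloading = 0.6 * ai_usage + rng.normal(0, 1, n)            # assumed AI -> offloading
critical_thinking = -0.5 * offloading + rng.normal(0, 1, n)  # assumed offloading -> CT

# Total effect: critical thinking regressed on AI usage alone
total = sm.OLS(critical_thinking, sm.add_constant(ai_usage)).fit()

# Direct effect: the same regression with the proposed mediator included
X = sm.add_constant(np.column_stack([ai_usage, offloading]))
direct = sm.OLS(critical_thinking, X).fit()

print("total effect of AI usage:", round(total.params[1], 2))
print("direct effect (offloading controlled):", round(direct.params[1], 2))
# The direct effect shrinking toward zero is what "offloading mediates the
# relationship" means statistically -- on cross-sectional self-reports it
# does not validate the causal mechanism itself.
```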

The Critical Finding Everyone's Ignoring: Education as a Protective Factor

Here's what's particularly frustrating about the weaponisation of this study: critics are completely overlooking one of its most important findings. The research actually demonstrates that education serves as a protective factor against potential negative effects of AI usage.

The Quantitative Evidence

  • Higher educational attainment was associated with better critical thinking skills regardless of AI usage

  • Education moderates the AI-critical thinking relationship (β = 0.02, p = 0.046; an interaction-term sketch follows this list)

  • Higher education levels appear to mitigate negative effects of AI tool usage

  • Significant differences in cognitive engagement were observed across education levels
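
For readers who want to see what "moderates" means in practice: a moderation effect like the reported β = 0.02 is typically tested by adding an AI-usage by education interaction term to the regression. The sketch below uses invented data and assumed coefficients purely to illustrate the technique:

```python
# Minimal moderation sketch on invented data: a positive interaction term
# means the negative AI-usage slope flattens at higher education levels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 666
df = pd.DataFrame({
    "ai_usage": rng.normal(0, 1, n),
    "education": rng.integers(1, 6, n).astype(float),  # 1 = school-level ... 5 = postgraduate
})
df["critical_thinking"] = (
    -0.5 * df["ai_usage"]                      # assumed negative main effect
    + 0.3 * df["education"]                    # assumed benefit of education
    + 0.1 * df["ai_usage"] * df["education"]   # assumed protective interaction
    + rng.normal(0, 1, n)
)

model = smf.ols("critical_thinking ~ ai_usage * education", data=df).fit()
print(model.params[["ai_usage", "education", "ai_usage:education"]])
```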

The Qualitative Evidence

The study reveals stark differences in how people with different educational backgrounds approach AI:

Higher-educated participants:

  • More aware of AI tool limitations

  • More likely to cross-check AI-provided information

  • Greater scepticism toward AI outputs

  • Quote: "I always cross-check AI recommendations because I know it's not always accurate" (Master's degree holder)

Lower-educated participants:

  • More likely to accept AI outputs uncritically

  • Quote: "I don't have the time or skills to verify what AI says; I just trust it" (High school graduate)

Age Effects: A Cohort Story

The study also found significant age-related patterns:

  • Younger participants (17-25): Higher AI usage, more cognitive offloading, lower self-reported critical thinking

  • Older participants (46+): Lower AI usage, less cognitive offloading, higher self-reported critical thinking

Crucially, this could reflect cohort effects, digital nativity, or life experience rather than pure age effects—but critics are ignoring this nuance entirely.

Why This Finding Changes Everything

This educational protective effect isn't just interesting—it's the key insight that completely undermines the "ban AI from classrooms" argument.

The Actionable Insight

Critical thinking education appears protective against cognitive offloading. This suggests that interventions focused on critical thinking training could mitigate risks, pointing to educational rather than technological solutions.

The problem isn't AI per se—it's the lack of critical evaluation skills.

Policy Implications

  • Supports investment in critical thinking curricula

  • Highlights the importance of media literacy and AI literacy education

  • Points to differential impacts based on educational background

  • Suggests we need more education, not less technology

The Critical Methodological Weaknesses

1. The "AI Tools" Problem

The study treats "AI tools" as a monolithic concept, lumping together ChatGPT for academic work with Netflix recommendations, Amazon searches, and casual interactions with Alexa. This is like studying "vehicle usage" by combining Formula 1 drivers with people who occasionally ride the bus.

Someone using Spotify algorithms gets grouped with someone relying on ChatGPT for all their writing tasks. The lack of differentiation by task context, usage depth, or sophistication renders the "AI usage" measure practically meaningless.

2. Cross-Sectional Design with Self-Report Data

This creates a double weakness. Cross-sectional design cannot establish causation—we don't know if AI causes lower critical thinking or if people with lower confidence in their thinking gravitate toward AI tools.

More problematically, all data is self-reported with no objective validation. We have no idea whether people's self-assessment of their critical thinking abilities correlates with actual performance.
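
A quick simulation shows why this matters. The two toy "worlds" below are pure assumptions, not the study's data: in one, AI usage lowers self-rated critical thinking; in the other, people with lower confidence in their thinking simply use AI more. A one-off survey cannot tell them apart:

```python
# Two invented data-generating stories that a cross-sectional survey
# cannot distinguish.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 666

# World A: AI usage lowers self-rated critical thinking
ai_a = rng.normal(0, 1, n)
ct_a = -0.7 * ai_a + rng.normal(0, 0.7, n)

# World B: lower confidence in one's own thinking drives AI adoption
ct_b = rng.normal(0, 1, n)
ai_b = -0.7 * ct_b + rng.normal(0, 0.7, n)

print("World A (AI -> confidence):", round(pearsonr(ai_a, ct_a)[0], 2))
print("World B (confidence -> AI):", round(pearsonr(ai_b, ct_b)[0], 2))
# Both worlds produce essentially the same negative correlation.
```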

3. Uncontrolled Confounds and Demand Characteristics

The study fails to address alternative explanations:

  • Pre-existing cognitive differences driving both AI adoption and self-perception

  • Personality factors affecting technology adoption

  • Professional requirements for AI usage

  • The fact that participants knew they were in a study about "AI and critical thinking"

When you tell people you're studying AI's impact on thinking, you're practically guaranteeing biased responses.

The LinkedIn Problem: From Nuance to Soundbites

What frustrates me most isn't the study itself—it's how it's being weaponised. I've watched people with 20,000+ LinkedIn followers, writing for well-respected publications, use this research to make definitive statements like "AI shouldn't be in classrooms" based on correlation data from a convenience sample.

They're completely ignoring the finding that education is protective—which actually supports the case for thoughtful AI integration with proper training, not AI avoidance.

This represents the worst kind of rhetorical fallacy: taking a nuanced piece of research and boiling it down to "I don't like this, therefore you shouldn't like this."

What We Actually Need for Evidence-Based Conclusions

To move beyond correlation and establish causation, we need the following (a rough sample-size sketch for such an experiment follows the list):

  • Longitudinal designs tracking individuals over time

  • Experimental manipulation of AI usage with control groups

  • Objective measures of cognitive performance, not just self-reports

  • Precise specification of AI usage types and contexts

  • Control for confounding variables and demand characteristics
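
As a rough illustration of what that planning involves, here is a minimal power-analysis sketch for a hypothetical two-arm experiment (an AI-using group versus a control group, compared on an objective critical-thinking measure). The effect sizes are assumptions, not estimates from the study:

```python
# Minimal sample-size sketch for a hypothetical two-group experiment,
# using assumed standardised effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.4, 0.6):  # small, moderate, and fairly large assumed effects
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"assumed effect d = {d}: ~{round(n_per_group)} participants per group")
```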

The Bigger Picture: Moving Beyond Binary Thinking

Here's what's particularly troubling about this weaponisation of research: it shuts down the nuanced discussion we desperately need about AI in education.

The study's most important finding—that education serves as a protective factor—actually supports thoughtful AI integration rather than AI avoidance. Yet critics are ignoring this entirely.

As educators, we should be:

  • Teaching students to use AI tools thoughtfully and critically

  • Understanding the difference between AI as a crutch and AI as a cognitive amplifier

  • Developing media literacy skills to evaluate research claims

  • Having nuanced discussions about appropriate AI integration

  • Investing in critical thinking education as the real solution

A Warning to the Stick-Wielders

Here's some friendly advice for those attempting to use studies like this as a stick to beat educators with: be very cautious. You never know—we might just take that stick off you and turn it on you ourselves.

We're trained to analyse arguments, spot logical fallacies, and evaluate evidence. When you misrepresent research to support predetermined positions, we notice. And we're not afraid to call it out.

A Call for Better Discourse

This study is valuable. It adds to our understanding and raises important questions. But when people with significant platforms—especially academics who should know better—misrepresent research findings whilst ignoring the crucial evidence about education's protective role, they damage the quality of our professional discourse.

The research actually supports what many of us have been arguing: the solution isn't to ban AI from education, but to improve critical thinking education and AI literacy. Education is the protective factor, not the problem.

We can do better. We must do better.

The complexity of AI's impact on human cognition deserves rigorous research and thoughtful analysis, not weaponised soundbites and misleading graphics designed to shut down discussion.

Let's commit to engaging with the actual evidence—all of it—acknowledging limitations, and having the nuanced conversations our students and profession deserve.

#AIinEducation #EdTech #ResearchLiteracy #CriticalThinking #EducationalResearch #AIliteracy #TeacherProfessionalDevelopment #EvidenceBasedEducation

Here’s a link to the study in question:

Gerlich, Michael. 2025. "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking" Societies 15, no. 1: 6. https://doi.org/10.3390/soc15010006

John Dolman

The AI English Teacher - Teacher of Media Studies @ Ponteland High School. Former Head of Languages and Cultures Faculty @ PRINCE OF WALES ISLAND INTERNATIONAL SCHOOL | MEd, AST.

3mo

Here's the original paper (it's in the article but I know many folk will click past this.) Gerlich, Michael. 2025. "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking" Societies 15, no. 1: 6. https://doi.org/10.3390/soc15010006

Patrick Dempsey

AI change leadership system: Free AI Change Profile → Team Breakdown → Workshop Unlock

3mo

This makes perfect sense. AI literacy is, in any robust sense, just subject matter expertise. If I use an LLM outside of my domain expertise, I would have no idea when to "fact-check" or question something. I would also be significantly less successful having it produce something meaningful outside my discipline.
