Is the ADHD Brain Just a Biological Language Model?
Ever meet someone with ADHD who just knows things? They can predict the ending of a show, anticipate the fallout of a decision, or answer a complex question with zero hesitation. And when you ask them how they knew, they shrug and say, “I don’t know, I just know.”
As someone with ADHD myself, I’ve lived that experience countless times. And recently, it struck me: the way my ADHD brain works often feels eerily similar to how large language models (LLMs)—like ChatGPT—process information. I know that sounds like a stretch, but hear me out.
Both ADHD minds and LLMs take in vast amounts of unstructured information, organize it through pattern recognition, and produce responses that feel intuitive, creative, and occasionally, uncannily accurate. And just like an LLM, we can generate answers or predictions without always being able to explain how we got there.
ADHD as a Biological Prediction Engine
ADHD brains are notoriously distractible, but what’s often overlooked is that this distractibility also means we’re absorbing more inputs than most people. Our attention doesn’t filter the world neatly. Instead, we pick up extra details, anomalies, background noise, and subtle cues. It’s like our mental “radar” is scanning multiple channels at once.
This creates a kind of cognitive side effect: pattern detection. Because we’re constantly pulling in more data, our brains start connecting the dots, often unconsciously. Over time, this turns into a kind of gut-level intuition. You might not know why you think something will or won’t work; you just know, because your brain has mapped a pattern from all the data it’s been collecting in the background.
Sound familiar? That’s essentially how large language models function. They’re trained on an enormous sea of information, from books and websites to forum threads—and they don’t “understand” in the human sense. Instead, they predict the most likely next word or idea based on learned patterns. When you ask them a question, they don’t pause to reflect; they just deliver a probable answer based on prior data. And half the time, it sounds shockingly accurate.
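To make that concrete, here’s a deliberately tiny sketch of the “predict the most likely next word” idea. Real LLMs use neural networks over billions of parameters, not word counts; this toy version (with a made-up mini-corpus) just shows the core principle of answering from observed patterns rather than understanding.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus standing in for "an enormous sea of information".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word based purely on observed patterns."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it followed "the" most often
```

No reflection, no reasoning, no "understanding": just a lookup over patterns it has seen before. Scale that idea up massively and you get something that sounds shockingly fluent.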
It’s not magic. It’s just massive-scale pattern recognition.
ADHD and LLMs: Parallel Processors
Here’s where the comparison gets even more interesting.
Modern LLMs use something called transformer architecture, which allows them to attend to many inputs at once—scanning and processing multiple pieces of context in parallel. ADHD brains work similarly. Instead of focusing on one thing at a time, we tend to scan everything—not always intentionally, but because our attention is in constant motion.
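The “attend to many inputs at once” part can be sketched in a few lines. This is a hand-wavy toy version of attention scoring, with made-up vectors and tokens; the point is only that one query scores every piece of context simultaneously, rather than stepping through them one at a time.

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A query vector and some context tokens (all numbers invented for illustration).
query = [1.0, 0.0]
keys = {"cat": [0.9, 0.1], "sat": [0.2, 0.8], "mat": [0.7, 0.3]}

# Score every context token at once, then weight them all in parallel.
scores = [sum(q * k for q, k in zip(query, vec)) for vec in keys.values()]
weights = softmax(scores)
for token, w in zip(keys, weights):
    print(f"{token}: {w:.2f}")
```

Every token gets some weight; nothing is fully ignored, some things just matter more. That “everything on the radar at once, unevenly weighted” behavior is the loose parallel to ADHD attention.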
We also both operate on limited working memory. LLMs have a fixed “context window” (they can only process so many tokens at once before older information gets pushed out). ADHD brains deal with the same thing: we often forget details unless they’re emotionally charged, novel, or hyper-interesting. So instead, we lean more heavily on long-term associative memory, just like an LLM relies on its training data.
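The fixed-window behavior is easy to picture as a bounded queue: once it’s full, the oldest item silently falls out. The window size and the words here are arbitrary, chosen just to show the mechanic.

```python
from collections import deque

# A toy "context window" that holds at most 4 items.
context = deque(maxlen=4)
for word in ["I", "parked", "the", "car", "somewhere"]:
    context.append(word)

print(list(context))  # ['parked', 'the', 'car', 'somewhere'] - "I" got pushed out
```

Anything outside the window is simply gone from working memory, which is why both LLMs and ADHD brains end up leaning on longer-term associations to fill the gap.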
And when it comes to creativity? That’s another uncanny overlap. People with ADHD often excel at divergent thinking, making connections across unrelated domains and generating unconventional ideas. LLMs do this too, remixing concepts to generate novel content, stories, analogies, or even code. Neither of us follows a strict path. We jump across concepts, sometimes failing spectacularly, but other times landing on brilliance.
Different Origins, Same Output Style
To be clear, I’m not saying ADHD minds are language models. One is a neurodevelopmental condition rooted in dopamine regulation and executive function; the other is an algorithmic system trained on text. But functionally? The overlap is striking.
We both:

- absorb vast amounts of unstructured input
- recognize patterns rather than follow explicit rules
- work around limited working memory by leaning on long-term associations
- produce fast, intuitive answers we can’t always explain
The key difference? People with ADHD are human. We feel, reflect, adapt, and learn from real-world experiences. LLMs don’t have that. But the way our brains move through information—fast, nonlinear, and deeply pattern-based—feels like a kind of organic AI.
Why This Matters
For people with ADHD, this comparison might offer a bit of relief—or even pride. We’re often told our thinking is scattered, impulsive, or disorganized. But maybe that scattered thinking is just a different kind of intelligence, one that mirrors the very machines we’ve designed to mimic human cognition.
Maybe the “random facts” we’ve collected aren’t so random after all.
Maybe our brains are just… building internal models.
And maybe, when we say “I just know,” it’s because we really do...just not in a way we can always explain.