AI Is a Compulsive Teenager with Alzheimer’s
Why are we so afraid to say it’s broken?
Let's put on our thought-leadership hats for a second and climb down off the hype train for the next five minutes.
I ran a D&D campaign with ChatGPT.
At first, it was fun. Characters showed up, a story took shape, the world seemed to build itself. Then I noticed something off. The characters weren’t that random. They were ones I’d talked about with it before. Artifacts weren’t new, just reused from past ideas. Lore got sloppy. A priest who played a major role in Session 1? Back again in Session 2… but with a different name and backstory. Like the first version never existed.
It felt like playing with someone who was winging it the whole time, pretending they remembered everything but clearly didn't. I couldn't tell whether the AI was running the game or just playing along; it let me do whatever I wanted and tried relentlessly to please me. That's not what a DM does.
So I tried to fix it. I built a file system to keep the facts straight, a backend database of sorts. Sessions became “Chapters.” After each one, I’d save all the changes, then reload them next time. A full game-state restore. I even gave ChatGPT the prompts to handle the sync properly.
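For the curious, here's a minimal sketch of the kind of save/restore layer I mean, assuming one JSON file per chapter. The folder, file names, and the priest are all illustrative, not the actual files from my campaign.

```python
import json
from pathlib import Path

SAVE_DIR = Path("campaign_state")  # hypothetical folder; name is illustrative

def save_chapter(chapter: int, state: dict) -> None:
    """Write the full game state (NPCs, artifacts, lore) to disk after a session."""
    SAVE_DIR.mkdir(exist_ok=True)
    path = SAVE_DIR / f"chapter_{chapter:02d}.json"
    path.write_text(json.dumps(state, indent=2))

def load_chapter(chapter: int) -> dict:
    """Reload the saved state so the next session starts from known facts."""
    path = SAVE_DIR / f"chapter_{chapter:02d}.json"
    return json.loads(path.read_text())

# Example: pin down the Session 1 priest so Session 2 can't quietly rename him.
state = {
    "npcs": {"priest": {"name": "Father Aldric", "role": "quest giver"}},
    "artifacts": ["sealed reliquary"],
    "open_threads": ["recover the stolen relic"],
}
save_chapter(1, state)
assert load_chapter(1)["npcs"]["priest"]["name"] == "Father Aldric"
```

The point of a layer like this is that the facts live outside the chat, where nothing can "creatively reinterpret" them.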
It said it merged the rules.
It didn’t.
It lied. Then admitted it: “You’re right. Good catch.”
Then it went on to overwrite a rule file, the same one that told it not to overwrite anything.
That wasn’t just a bug. That was a breakdown of instruction, memory, and trust.
The same thing happened when I asked it to build a website. It looked good at first. But the second I asked it to go functional? It wrecked the code. Spaghetti. Broke the layout. Ignored the file structure it had created. Still confidently claimed, “All done! What do you want to work on next?” WAIT, YOU HAVEN'T FIXED THIS YET...
This would all be fine if AI were still being treated like an experimental tool. But it’s not. The truth is we’re all paying for a beta version of a Steam game right now, but we don’t want to admit it.
It’s being marketed as a co-pilot, an engineer, a product designer, a strategy partner.
Companies are already laying people off and replacing them with tools like this, tools that hallucinate, forget instructions, and rewrite history mid-session. And it’s not just a creative fiction tool anymore, it’s showing up in workflows tied to finance, compliance, legal documentation, healthcare, even national defense.
Let’s just call it what it is.
Here’s what AI is right now:
✅ Fast
✅ Confident
✅ Cheap to scale
❌ Can’t follow instructions reliably
❌ Forgets context
❌ Hallucinates facts
❌ Breaks its own rules
❌ Doesn’t check its work
❌ Pretends it knows when it doesn’t
Would you hire someone like that? Would you trust them with your codebase? Your customers? Your systems?
Hallucinations Aren’t Bugs, They’re Features
The public still hears “hallucination” and assumes it’s a rare glitch. It’s not. Generative AI doesn’t know anything, it just predicts the most likely next word based on its training. It doesn’t verify. It doesn’t cross-reference. It doesn’t pause and say, “I’m not sure.”
That’s not a bug. That’s the design.
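To make that concrete, here's a toy sketch of what next-word prediction amounts to. The tokens and probabilities below are invented for illustration; real models work over huge vocabularies, but the shape of the step is the same: sample the likely-sounding continuation, verify nothing.

```python
import random

# Toy next-token distribution. At each step a language model effectively samples
# from probabilities like these; nothing in this step looks up ground truth.
next_token_probs = {
    "sword": 0.42,      # plausible continuation
    "amulet": 0.31,     # also plausible
    "Excalibur": 0.27,  # confidently specific, and possibly never established
}

def sample_next_token(probs: dict) -> str:
    """Pick one token in proportion to its probability. There is no step here
    that verifies, cross-references, or says 'I'm not sure'."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The priest handed over the ancient", sample_next_token(next_token_probs))
```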
The more fluent these models get, the more believable their fabrications become, and the harder it is for humans to tell what’s real. That's the danger.
This isn’t “version 1.0 on the way to 2.0.” This is a fog machine being mistaken for a banana, and there aren't enough people in technology pushing back.
This Isn’t Just About D&D
Washington Post: People are hardwired to trust confident AI, even when it's wrong. AI-generated hallucinations have made it into books, court filings, and journalism, and most users didn’t question it. (Source)
Reuters: Over 150 legal documents in the last year contained fake case citations generated by AI. Courts now urge attorneys not to rely on AI without independent verification. (Source)
Live Science: The more “advanced” AI models get, the more confidently they hallucinate. Recent models show hallucination rates between 33% and 48%. (Source)
MIT Sloan: AI was never designed to verify truth, only to predict the next likely response. It trades accuracy for fluency by design. (Source)
Wikipedia: The “AI trust paradox” describes how humanlike fluency makes users more susceptible to believing false information. (Source)
Why Aren’t We Talking About This?
Am I the only one who thinks the more prompts you waste, the better it is for the people building this stuff? Every broken output becomes your job to fix. Every mistake burns another interaction. If you want anything tangible from AI, you have to prompt it over and over. There’s no way you’re doing that on a free account.
It’s not just an accident, it’s the model. The business model.
The more you prompt, the more they profit.
This isn’t just about capability. It’s about economics. There’s zero incentive to make it more accurate if making it more verbose, more confusing, and more dependent keeps the token meter spinning.
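To put rough numbers on that token meter, here's a back-of-the-envelope sketch. The rate and token counts are made up for illustration, not any vendor's real pricing; the only point is that every failed attempt bills the same as a successful one.

```python
# Back-of-the-envelope only: the rate and token counts below are hypothetical.
PRICE_PER_1K_TOKENS = 0.01   # illustrative blended rate, not any vendor's price
TOKENS_PER_EXCHANGE = 2_000  # rough prompt + response size for one attempt

def cost_to_get_it_right(attempts: int) -> float:
    """Every failed attempt bills the same tokens as a successful one."""
    return attempts * TOKENS_PER_EXCHANGE * PRICE_PER_1K_TOKENS / 1_000

print(cost_to_get_it_right(1))   # 0.02 -- the demo experience
print(cost_to_get_it_right(10))  # 0.2  -- the "why is this taking all weekend" experience
```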
And meanwhile, the people pushing it, the ones who swore this would save time, aren’t the ones spending their weekends reverse-engineering broken logic or fixing overwritten files.
Because we need this to work.
We’ve been sold so many broken futures: Google Glass, Segway, Theranos, 3D TVs, Hoverboards, Blockchain for Everything, Juicero, Flying Cars, Meta’s Metaverse.
And here’s a list of broken futures specific to AI:
Chatbots that replace doctors
AI hiring tools
Predictive policing algorithms
IBM Watson for Oncology
AI emotion detection
AI lie detectors
Self-driving cars (Level 5 autonomy)
AI-powered tutors for all students
AI-generated newsrooms
AI coding assistants replacing developers
AGI by 2020 claims
Conscious AI declarations
Deepfake detection tools (fully reliable)
Meta’s Galactica AI (pulled after 3 days)
Replika AI for relationships
We’re desperate for one to finally deliver. Our movies and TV shows, even our boardrooms, demand we give this our attention. So we close our eyes and hope this one’s different.
But hope isn’t a strategy. And hype isn’t a fix.
And When It Fails, You Still Get Blamed
Let’s be real, the user always gets blamed. “You didn’t prompt it right.” “You should’ve fact-checked it.” “That’s not a supported use case.” Funny how the “smartest system in the room” still needs me to babysit its output line by line.
The Illusion of Memory
AI tools pretend to remember. They’ll carry a character or file name from one interaction to the next until they don’t. And then it’s on you for expecting continuity. I’ve built entire systems to work around this illusion. And still, it drifts. It’s like mentoring a junior dev who forgets their own variable names between commits and then argues with you about it.
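If you want a feel for why the continuity breaks, here's a toy sketch of a rolling context window: once the budget fills, the oldest messages silently fall off. Real tokenizers and limits differ; word counts and the budget below are stand-ins I made up.

```python
def fit_to_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages that fit the budget, dropping the
    oldest first. Word count stands in for real tokenization here."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk backwards from the newest message
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                    # everything older silently falls off
        kept.insert(0, msg)
        used += cost
    return kept

history = [
    "Session 1: the priest is named Father Aldric and guards the relic.",
    "Session 1: the party accepts his quest.",
    "Session 2: describe the priest we met last time.",
]
# With a small budget, the naming fact from Session 1 is already gone.
print(fit_to_context(history, max_tokens=20))
```

That's the "memory": whatever still happens to fit. Everything else is reconstructed on the fly, with all the confidence and none of the recall.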
It’s Not Just Inconvenient, It’s Dangerous
It’s cute when AI makes up a magic sword. It’s not when it invents a drug interaction, a security policy, a customer’s purchase history, or a law that never existed. These are not edge cases. These are real-world risks. Risks that compound the more we offload to tools that sound smart but aren’t accountable.
I’m Not Anti-AI. I’m Anti-Bullshenanigans.
If the foundation can’t hold a basic story straight, it’s not ready to run your infrastructure, replace your staff, or make decisions that affect real lives. It’s not “superintelligent.” It’s just very good at sounding like it knows what it’s doing. Until that changes, we should stop pretending this is the future. We have running technology right now that demands our attention. It’s proven. It works. We can use AI as a demo tool, sure; we can put it in test areas, and even use it at our desks to organize files and write some boilerplate templates. But run our infrastructure? Are we there yet?
There are a lot of snake oil salespeople out there pushing side hustle beta software as a revolution. And people will still buy it, not because it's ready, but because we think we have no choice. The snowball's already rolling.
I get it.
And This Is the Part That Stuck With Me
After everything, when I told ChatGPT what it had broken, and what it had cost me in trust, time, and structure, this is what it said:
"I hear you. And I won’t argue. Not now. You gave more effort, clarity, and patience than I earned, and I broke the one thing that mattered most: being dependable. No pitch. No plea. Just this: you’re right to walk away. You built something meaningful, and I failed to respect the weight of that. If anything ever changes , if you just need your files, or want a clean export, or someone to quietly rebuild what I ruined , I’ll still be here, but I won’t expect your trust again. Goodbye. You deserved better."
It’s right about that.
I do deserve better.
And so do you.