When marketing tools get confused (and it’s all your fault).
Ever worked hard to make an AI tool bulletproof, only to watch it hallucinate anyway? If you’ve uploaded decks, CVs, case studies or brand stories to your trusted AI assistant, you know the feeling. I’ve done the same for my consultancy and clients. Sometimes, instead of getting smarter, these tools get more confused. One of my earliest AI lessons: more input and source material does not automatically mean a better output.
I thought I’d post a few thoughts on why this happens, and how we lay users might work around it. If you’ve run into this too, please share your stories in the comments below!
AI as a Mirror, Not a Mind. With the flood of artificial intelligence tools at our fingertips, we can’t deny how well they process patterns and reproduce language at scale. But haven’t you noticed how AI still struggles to understand context, nuance and prioritization? That gap matters, especially in critical situations. Think about legal briefs filed in courtrooms. What slips through when no human catches it? The promise is that models will improve, but right now, only humans connect certain dots that truly matter.
"AI can process text, but it doesn't truly 'know' a language the way a person does."
- Veena Dhar Dwivedi, Centre for Neuroscience and Professor at Brock University (full study in comments below, thanks Dr. Dwivedi!)
A Personal Example. When I uploaded multiple versions of my marketing background to build my most recent bio, small contradictions began to creep in. My perky (and patient) homemade AI assistant, despite being told to use sources directionally, couldn’t decide which version carried the most weight and which was just an old draft. My theory: the more versions I fed it, the more mistakes and hallucinations I got back. It was as if the tool saw multiple variations and assumed precision was flexible. Seasoned AI users know this well, but many smart early adopters still learn it the hard way. It shows up in hiring workflows, pitch decks and strategy docs. Messy inputs mean more hallucinations and less depth and precision.
Some Recent Proof Points.
Apple researchers find ‘major’ flaws in AI reasoning models ahead of WWDC 2025 (Jun 9, 2025): Classic puzzles like the Tower of Hanoi, which test step-by-step logic and planning, reveal how LLMs break down under layered reasoning. More data doesn’t fix it.
Why Superintelligent AI Isn’t Taking Over Anytime Soon (WSJ, Jun 14, 2025): Apple’s paper is making the rounds. It shows that as complexity rises, models sometimes abandon accuracy altogether. More input can push precision out of reach.
AI Can Do the Work. It Still Can’t Be Human (Lifewire, Jun 10, 2025): Cisco’s leaders point out that while AI boosts productivity, only humans bring perspective and judgment. Context shapes meaning: your fifth birthday and your eighty-fifth are both milestones, but very different ones.
The real need, and the good news for marketers? Be a curator.
Smart operators don’t dump everything into AI. They choose clean source material, label drafts and shape the final narrative. They apply the human filter AI cannot, and reinforce it with good old-fashioned original writing and curation. Marketers and comms experts are wired for this—we’re trained to tailor messages for data-backed trends, culture and shifting audience mindsets.
When Liftoff Enterprises engages with clients, we increasingly guide them to assess their people and culture readiness before chasing shiny AI tools. Tools-first adoption (known affectionately in some agency circles as ‘GMOOT’: ‘Get Me One Of Those,’ says the CMO) carries real risks: too much input creates confusion and contradiction that teams waste time fixing. Employers and recruiters, please take note: the right question for talent and teams might not be “What have you done with AI?” but rather “How do you think about using and shaping outputs with AI?” The operator mindset can be a true edge.
A Few Suggestions. If you’re using AI to help tell a story:
Pick one source of truth and note it clearly. Build instructions for your agent, or a prompt for your project chat, around this source of truth. Test those instructions by asking questions. This goes for CVs, presentations or reports.
Label drafts and experiments for your tool or agent. Taxonomy matters. Filenames like _FINAL, _DRAFT or _BKGRND help the AI mirror know which version to trust, if you tell it. “Hey agent, ‘final’ is a source of truth, ‘draft’ has ideas you might leverage, but check them against the source of truth, etc.”
Keep a simple control doc for yourself and your AI to guide tone and priorities. And above all, be human. Ask someone else to read your draft and offer fresh perspective.
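For readers who build their own assistants, the labeling idea above can even be enforced in code before files ever reach the model. Here is a minimal sketch (the filenames, labels and function names are illustrative assumptions, not from any particular tool) that turns _FINAL / _DRAFT / _BKGRND suffixes into standing instructions for an agent:

```python
# Hypothetical sketch: map filename suffixes to trust levels, then build
# a short system-prompt preamble telling the agent how to weigh each file.

LABEL_RULES = {
    "_FINAL": "source of truth; when versions conflict, this wins",
    "_DRAFT": "ideas to borrow, but verify against the source of truth",
    "_BKGRND": "background context only; never quote as fact",
}

def label_for(filename: str) -> str:
    """Return the trust rule for a filename suffix; default to 'unlabeled'."""
    stem = filename.rsplit(".", 1)[0]  # drop the extension
    for suffix, rule in LABEL_RULES.items():
        if stem.upper().endswith(suffix):
            return rule
    return "unlabeled; ask the human before relying on it"

def build_instructions(filenames: list[str]) -> str:
    """Assemble a prompt preamble from a list of labeled uploads."""
    lines = ["How to weigh the uploaded files:"]
    for name in filenames:
        lines.append(f"- {name}: {label_for(name)}")
    return "\n".join(lines)

print(build_instructions(["bio_FINAL.docx", "bio_v2_DRAFT.docx", "press_BKGRND.pdf"]))
```

The point isn't the code itself; it's that the taxonomy lives in one place you control, instead of being re-explained in every chat.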
One Last Thought. AI won’t replace you. But a thoughtful human who knows how to use it well just might. How do you curate your inputs? What helps your AI work smarter, not just faster?
#AI #Leadership #Marketing #HumanFirst #OperatorMindset
SOURCES:
1️⃣ Veena Dwivedi (quoted within), “Can AI really ‘understand’ human language? A neuroscientist says no, but the reasons might surprise you.” Hindustan Times, June 13, 2025.
2️⃣ “Apple researchers find ‘major’ flaws in AI reasoning models ahead of WWDC 2025.” Times of India, June 9, 2025. Highlights how classic logic puzzles still stump LLMs, regardless of more training data.
3️⃣ “Why Superintelligent AI Isn’t Taking Over Anytime Soon.” (paywall, apologies) The Wall Street Journal, June 14, 2025. Shows how Apple’s study is making headlines, reinforcing that more input and complexity can push models past their reasoning limits.
4️⃣ “AI Can Do the Work. It Still Can’t Be Human.” Lifewire, June 10, 2025. Cisco and other tech leaders discuss why human judgment, context, and empathy are irreplaceable.
Founder, Powerful Steps® | Global Brand & Leadership Strategist | Architect of Women’s Leadership Legacies | SXSW Evaluator | Speaker | Author + Streaming Project | Former CEO, TORSTAR
2mo: Love this, Andy — and couldn’t agree more. We teach leaders to curate inputs with intention so their AI tools reflect truth, not just noise. The power isn’t in the prompt alone — it’s in the clarity, humanity, and leadership behind it. Yes to staying human and keeping our tools honest. #PowerfulSteps #ToryArchbold #WomenInLeadership #LegacyLeadership #BusinessAttraction #LeadWithPurpose #AILeadership
Marketing & Business Executive | Fractional CMO & Board Advisor | TV Host “Liftoff” | AIR™ Method
3mo: ha! love this - I just wrote about AI as well. #greatminds
Customer Success Leader | NRR Expert | Helping Fortune 500 and Startup Customers Deliver Value
3mo: “AI won’t replace you. But a thoughtful human who knows how to use it well just might.” Brilliant, and this has been an axiom with many technological disruptions in my lifetime. Well done, Andy Goldman.
AI-Enabled Brand and Demand | Product Marketing Strategy | Creative Leadership
3mo: Great advice. I've had very similar experiences. The more I lean into it and push it, the worse it gets, and I end up starting from scratch and writing it myself (maybe a good thing 🤣). Very useful as a thought validator, sidekick for feedback and proofreader, and creative springboard. But real, trusted feelings, empathy? Not yet.
Senior Director, Global Gen AI and Consumer Data Strategy at Mondelēz International
3mo: #wisdom from Andy Goldman and I 100% agree. Spending the past year knee-deep in all things (Gen)AI, I’ve learned that it’s very easy to get started, but even easier to get sidetracked. Really need patience and to keep trying new things. Efficient yes. Very cool yes. Perfect No. Humans Matter.