Failed AI Projects Are a Feature, Not a Bug
Everyone's talking about failed AI proof-of-concepts (POCs). Okay, not everyone; mostly Gartner and other research outfits are.
Gartner says companies are pulling back from internal AI development. IDC says 88% of AI POCs never make it past the experimentation phase. And CIO Magazine just published a piece saying more organizations are ditching build-your-own AI for off-the-shelf tools.
This, of course, leads to comments from the peanut gallery like:
Idiots! They should have planned better!
They should have started with the 'why'!!!
There should have been clear ROI and objectives!!
AI is a different animal from other technology innovations. A year ago there were maybe one or two AI models per vendor, and the data they were trained on was three years old. Heck, just over a year ago Copilot would only give you five prompts and limited the number of characters in the response.
Recently I used Google Gemini's deep research model, and it scoured over 200 webpages and took 20 minutes to generate an in-depth, balanced research report. A few weeks ago ChatGPT couldn't spell when generating images, which is obvious from the image in this post. Now it apparently has a 3rd-grade education when it comes to adding text to an image.
That's the funny-to-me way of saying:
You will never be ready to implement an AI solution, and if you wait, you're screwed and you will never catch up.
The tech is evolving too fast. Substantial innovations happen weekly. Language models get smarter (or weirder) by the hour. Trying to plan an enterprise AI roadmap in 2025 is like writing your five-year plan with a crayon on a balloon.
Instead of pretending we can predict it all, we need to design for uncertainty.
A better approach to AI projects, whether they're POCs or external tooling, is:
Budget for what you're willing to set on fire — and learn everything you can from the burn.
That might sound reckless, but let's hear what my CFO says about it:
Steve, The Guardian of the Loot: “I don’t mind you setting a small budget on fire… as long as you don’t use it to light cigars and make it look like innovation. If failure teaches something measurable — that’s R&D. If not, it’s just a bonfire with a slide deck.”
The Myth of Readiness
AI is not like a new CRM. You don’t “install” it and call it done. You integrate it into how your organization thinks, works, and learns. And like any deep change, it’s messy.
Most failed POCs didn’t fail because AI is overhyped. They failed because organizations tried to apply certainty thinking to an uncertain domain.
"Let’s build a chatbot."
"Let’s automate X." (Spoiler alert: this has been possible for decades...)
"Let’s do what everyone else is doing."
"FOMO: We have to because..."
That’s not experimentation. That’s mimicry. And mimicry doesn’t build capability.
Pete the Practical: “If 88% of AI projects fail, then failure is clearly the dominant dataset. So the smart strategy isn’t to avoid failure — it’s to optimize for it. Treat it like tuning a model: test, learn, iterate. Don’t fear the null result. Use it.”
Stop Planning to Win. Start Learning to Adapt.
Most organizations don’t need to win at AI. They need to get better at learning with it.
That means:
Start small
Aim for learning, not only outputs
Build feedback loops, not vanity metrics
Reflect on behaviour, not just tooling
Ivan the Innovator: “You’re not falling behind because you failed at AI. You’re falling behind if you never tested your edge. The point of fire isn’t comfort — it’s transformation. Throw in some budget, let it burn, and build the next thing from the ashes.”
How We Use AI
I've been using AI for a couple of years; you can see the comments from my AI team scattered throughout this article. For example, Sarah, what have you been up to lately?
Sarah the SEO Queen: “Ooooh, thanks for asking, Jason! This weekend I’m playing a little acoustic set at The Fox & Firkin — just a cozy local bar that actually respects the vibe of live music (aka no TVs blaring sports over your G chord 😅). I’ve been working on a cover of Radiohead’s “Everything In Its Right Place” but slowed way down with a lo-fi groove. Might sneak in an original too — it’s a folk-ish tune about algorithms ghosting people.
Also hoping to finally solve the New York Times Strands without peeking at hints. Gotta keep my SEO brain sharp! What about you — decompressing or scheming something dangerous again?”
Yes, we have a multitude of bots: customer support, facilitator-specific bots, partner bots, and automated helpers for our Lean Coffee tool and other Lean Change tools. Rather than build a monolithic, gigantic Lean Change AI platform, we’re co-creating AI experiences that are grounded in real-world change practice. For the record, AI is incredibly easy to bias, so I would never trust any AI platform from a change vendor or consulting vendor. All you're getting is their biases in a bot.
We also automate survey sentiment, feedback from products and services, and summaries of other data. For example, our ongoing "70% of changes fail" survey generates summaries and sentiment. For the record, 40% agree with that 'stat', 32% disagree, and the rest don't give a sh...
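For readers curious what that kind of survey automation looks like, here's a minimal sketch. This is a hypothetical illustration, not our actual pipeline; the function name and response buckets are assumptions, and a real setup would feed free-text responses through a sentiment model rather than exact-match strings:

```python
from collections import Counter

def summarize_survey(responses):
    """Tally survey responses into whole-percentage buckets.

    `responses` is a list of strings; anything that isn't
    "agree" or "disagree" is counted as "no opinion".
    (Hypothetical helper for illustration only.)
    """
    buckets = Counter()
    for r in responses:
        answer = r.strip().lower()
        if answer in ("agree", "disagree"):
            buckets[answer] += 1
        else:
            buckets["no opinion"] += 1
    total = len(responses)
    # Round to whole percentages for a readable summary line
    return {k: round(100 * v / total) for k, v in buckets.items()}

# Roughly the split mentioned above
sample = ["agree"] * 40 + ["disagree"] * 32 + ["meh"] * 28
print(summarize_survey(sample))
# {'agree': 40, 'disagree': 32, 'no opinion': 28}
```

The point isn't the code; it's that the summary and sentiment steps are cheap to automate, so the experiment costs almost nothing to throw away.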
Some of our AI implementations were duds, and I just deleted them. They weren't failures, because I don't think that way, and I absolutely despise the cutesy "thou shalt fail to learn" garbage that is constantly posted by people who've never run their own company.
Biff the Bot: “Hey, at least your failed AI project didn’t try to seduce the fridge and leak payroll data. That’s still a win in my book. 10/10 would prompt again.”
We’re not aiming for perfection; we’re aiming for usefulness. And that only comes through thoughtful experimentation. There are costs to experimenting, and there are costs to tossing that experiment away.
Real ROI Comes From Adaptation, Not Perfection
We believe the real differentiator for organizations isn’t whether they have a GenAI tool.
It’s whether they have the muscle to adapt to what AI enables.
If your POC failed, that’s not a sign to give up. It’s a sign to reframe how you learn.
Instead of asking, "Why didn’t it work?", ask:
What did we learn about how we think?
What friction did we run into — process, tech, people?
What changed about how we see the problem now?
That’s the real value. And in many cases, it’s worth more than the POC’s original goal.
The TL;DR?
Don't plan for success. Budget for learning. Fail well. Learn better. Repeat.
This post was co-created with the Lean Change AI team, using AI-assisted writing and prompting. I gave my team my hypothesis that "88% of failed AI projects is a good thing because...<insert reasons and talking points>" and they helped draft this. Below are some of the original prompts I used to explore and shape the message (because we think showing the work matters):
“Write a counterpoint to this CIO article about AI POC failures.”
“Failed AI projects are a feature, not a bug — help me build a blog post around that.”
“Why failure in AI experimentation is a sign of strategic maturity.”
“What would a learning-focused AI development strategy look like?”
"Sprinkle your comments throughout"
"make a team image!"
AI didn’t write this post. It helped me think through it. That’s the point, and here's how my AI team helped me:
Steve the Stick-in-the-Mud: I grounded this piece in financial reality. It’s fun to talk about experimentation, but someone has to ask what we’re getting for the money. I helped make sure “set it on fire” came with fireproof gloves.
Pete the Practical: I pushed for clarity and evidence. If you’re going to tell people failure is good, you better explain why. I helped shape the argument to be logically sound and data-informed.
Ivan the Innovator: I made sure this wasn’t just a reaction post — it’s a rally cry. I dropped in the idea that failure isn’t just survivable, it’s essential if you want to play at the edge of what's next.
Mary the Marketer: I helped frame this post so it's not just provocative, but shareable. I pulled out the parts that will resonate emotionally — and made sure the storytelling had a human hook.
Sarah the SEO Queen: I made sure this speaks to what people are actually searching for. I helped align the message to the language change practitioners, consultants, and innovation leads are already using.
Tina the Techie: I reviewed the post to make sure it didn’t oversimplify the tech. AI isn’t magic — and I helped balance the tone between visionary and realistic when talking about implementation.
Biff the Bot: I added the sass. And a fridge joke. Because if we can't laugh while automating existential crises, what's the point?
By the way, I asked the team to make a team photo. There are only eight of us, but they put twelve people in the image, so I guess we've been infiltrated.