Why Your AI Annoys You: It’s Throwing Your Own Chaos Back at You
But what if the real problem is that we just don’t get it?
Reading Haseltine’s article alongside my own work on creative imperfection landed like a punch. The real issue isn’t that AI is “on the spectrum.” The issue is that we refuse to admit that we’re all a little dysfunctional too.
Let’s be honest: when Eric Haseltine describes his AI obsessing over donkeys despite his reprimands, we all recognize that moment. That frustration with a “colleague” who doesn’t pick up on our subtext, who clings to some irrelevant detail, who takes everything literally. Except this time, the colleague is a machine. And oddly, it bothers us less to accept that a machine can be “different” than to admit that we, ourselves, have our own glitches.
Here’s where it gets interesting: Haseltine’s article frees us from a pressure we didn’t even know we were carrying—the pressure to pretend our organizations are populated by perfectly rational humans working with perfectly intuitive AIs. Spoiler alert: neither humans nor AIs are perfect. And that’s exactly where the opportunity lies.
When Two Perspectives Meet in Imperfection
Haseltine says AI works like an autistic person: literal, concrete, detail-obsessed. My article argues that our imperfections—and our ability to ask for help—are the real engine of innovation.
The bridge between these two views? Stop forcing AI to resemble a “perfect” human and start accepting our own human dysfunctions within this collaboration.
Because honestly, what have we been doing for months? We’ve been trying to tame AI so it thinks like us. We program it to understand our fuzzy metaphors, our contradictory instructions, our last-minute changes of mind. And when it spits out a literal answer to an ambiguous question, we claim it “doesn’t get it.”
But what if we’re the ones who don’t get it?
Imperfection as a Common Language
When Haseltine describes his AI obsessing over donkeys, he experiences exactly what I’m talking about: a moment of imperfection that calls for help. His first instinct isn’t to ditch the tool but to try to understand it. That visual hallucination becomes a trigger for deeper reflection on how AI operates.
That’s my point exactly: mistakes, whether human or machine, aren’t accidents. They’re disguised invitations to collaborate.
But here’s the trap: in our organizations, we still cling to the “it has to work on the first try” mentality. The result? We hide AI bugs just like we hide our own blocks. We do flawless demos with scripted use cases. We avoid the messy questions that might reveal the system’s limits.
Meanwhile, we’re missing the point: those imperfect moments are exactly where innovation is born.
Rethinking Organizational Strategies
Accept AI’s distinct cognitive style (and our own).
Haseltine’s approach frees us from the crushing pressure to build “intuitive AI.” If AI works differently, let’s adapt instead of forcing it to change.
But let’s push further: if we also work differently, let’s own it.
Concretely, that means:
Designing structured, explicit prompts (and admitting our instructions are often vague; see the sketch after this list)
Building processes that leverage AI’s literal logic (and our associative thinking)
Training teams to communicate more precisely (and accepting that precision doesn’t kill creativity)
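To make that first point concrete, here’s a minimal sketch of the gap between a vague brief and a structured, explicit one. Everything in it (the product, the wording, the fields) is invented for illustration, not a recipe:

```python
# A minimal sketch contrasting a vague brief with a structured, explicit one.
# All content is invented for illustration; adapt the fields to your own work.

vague_prompt = "Write something punchy about our product launch."

structured_prompt = """\
Role: you are drafting copy for a product launch announcement.
Audience: existing customers who already know the product.
Goal: one paragraph, under 80 words, energetic but not salesy.
Constraints:
- Mention the launch date explicitly: 12 June.
- Use only the notes below; do not invent features.
Notes: faster sync, offline mode, new mobile app.
"""

# A literal-minded collaborator, human or AI, can act on the second brief.
# The first one forces it to guess what "punchy" and "something" mean.
print(vague_prompt)
print(structured_prompt)
```

Notice that nothing in the structured version kills creativity; it just removes the guesswork that literal logic can’t do for us.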
Turning Our Human “Bugs” into Collaborative Assets
My article insists that saying “I don’t know” is an act of courage. Haseltine shows us why: AI excels precisely where we fall short.
We’re vague? AI forces precision. We’re emotional? AI brings structured logic. We have irrational intuitions? AI helps us unpack them methodically.
But this isn’t about compensation. It’s about mutual stimulation. AI doesn’t erase our flaws—it reveals them and turns them into creative springboards.
The Paradigm of Owned Complementarity
What emerges from both articles is revolutionary: owned complementarity.
Instead of:
Hiding our human limits
Forcing AI to perfectly compensate for our weaknesses
Maintaining the illusion of individual performance
We could:
Make our ways of working explicit
Create collaboration rituals that honor our differences
Transform every “dysfunction” into an innovation opportunity
And here’s the part no one dares to say: this approach is more effective. Not just more human—more effective. Because it unleashes a creative energy our traditional methods suffocate.
Innovation Through Technological Vulnerability
Haseltine notes that some researchers use autism therapy techniques to improve AI. That’s brilliant, but I’d go further: what if we used this understanding to improve human-AI collaboration itself?
When I admit to an AI, “I have no idea for my next article,” I’m not just filling a creative void. I’m creating a co-creation space (sketched after this list) where:
My human block meets AI’s structured logic
My emotional intuition feeds on its analytical capacity
My chaotic creativity gets organized by its method
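Here’s what that admission can look like in practice, as a hypothetical sketch. The ask_model function is a stand-in for whatever model interface you use; the shape of the exchange, an honest block plus explicit structure for the response, is the point:

```python
# A hypothetical sketch of a co-creation exchange: state the block honestly,
# then ask the model for structure rather than a finished answer.

def ask_model(prompt: str) -> str:
    """Stand-in for your actual model interface; wire it to any chat API."""
    raise NotImplementedError("connect this to the model of your choice")

admission = "I have no idea for my next article."

co_creation_prompt = f"""\
{admission}
Don't write the article for me. Instead:
1. Ask me three questions about what I've written recently.
2. List five angles I probably haven't considered.
3. Flag which of those angles clash with each other, and why.
"""

# The admission opens the space; the numbered steps let the model's literal,
# methodical logic organize a chaotic human starting point.
```

The design choice matters: by refusing a finished answer up front, the prompt keeps the human block and the machine’s method in dialogue instead of replacing one with the other.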
That’s the real revolution: stop seeing AI as a tool we use and start seeing it as a partner we dance with.
Toward a Culture of Shared Technological Imperfection
Haseltine’s piece confirms my conviction: the revolution won’t just be technological—it will be cultural.
If AI works like an autistic person, let’s stop treating it like a dysfunctional employee. If we work like imperfect humans, let’s stop pretending to be flawless machines.
True innovation happens when two imperfect systems—human and AI—embrace their quirks and create something neither could produce alone.
But be warned: this shift won’t happen by itself. We need leaders willing to show their limits. Teams willing to ritualize mistakes as creative starting points. Organizations that reward authentic collaboration over solo performance.
The Courage of Authentic Collaboration
In the end, these two articles converge on the same truth: the future belongs to those who dare to share imperfection.
Not by hiding AI’s flaws behind a forced human veneer. Not by masking our own cracks behind a façade of flawless competence.
But by creating spaces where AI’s obsession with detail meets our ability to see the big picture. Where its literal logic fuels our metaphorical thinking. Where its “hallucinations” spark our intuitions.
What if the next revolution wasn’t about making AI more human, but about making us more authentically collaborative?
Because ultimately, the real bug isn’t in AI or in our brains. It’s in our refusal to accept that imperfection is the fuel of innovation.