The Context Paradox: Why AI Feels Brilliant One Moment and Clueless the Next
AI surprises me almost daily. Sometimes it feels like magic: razor-sharp insights, well-argued recommendations, and strategic advice that could have come from a seasoned consultant. And then, just minutes later, it botches a basic task as if it had never seen the project before. Why does this happen – and how can we reduce these swings in performance? The answer is surprisingly simple, but not trivial to achieve.
The good and the bad
In the morning, I uploaded 10 large PDFs and Excel sheets and asked for an analysis: Where should we focus our workstreams? The answer: sharp, structured, and to the point. The LLM synthesized hundreds of pages and offered advice that I would have paid for. A high point.
Later, I asked my AI coding assistant to implement data persistence in a mobile app. It picked Android's SharedPreferences to store hundreds of course records. Anyone with Android experience knows that SharedPreferences is meant for a handful of user settings, not for a database. I asked Copilot to fix it – and it made the same mistake again. Only when I asked, "How would you store 1,000 courses in preferences?" did it admit: "Oh, that's a very hacky solution." What was missing – and what changed its mind?
That’s when I had my lightbulb moment:
There are only two reasons why an LLM fails: It either never learned it. Or it doesn’t have enough context.
Another example: I asked Gemini to write a LinkedIn post about a hot topic. It responded confidently with polished, generic fluff. No personality. No insight. I didn’t even notice right away – because it sounded right.
And again, the reason was context. It didn’t know about the recent topic. It didn’t know what I had posted before. It didn’t know what I cared about.
Context is King
Modern LLMs can handle huge context windows - up to 10 million tokens (Llama 4). They can search the web. They can tap into project repositories with MCP servers. But they still struggle if we don’t provide the right information.
The hard truth is: most of the important context is not written down.
If we want AI to be a true teammate, we need to treat context as a core part of collaboration.
Context Has Layers
Think of it like this:
Context flows across these layers – from long-term vision to short-term action. And it flows both ways: every decision you make now becomes part of your broader journey.
When we include this layered context, LLMs can align with our goals, reflect our style, and give advice that fits the situation. They stop being clever interns and start becoming trusted collaborators.
Make the Invisible Visible
We need to start writing down what was previously assumed — the kind of knowledge that was once passed on during onboarding chats, architecture reviews, hallway conversations, or hidden deep in people's heads.
Ask yourself and your team:
Writing this down is more than documentation — it's shared context. Once it's explicit, we can teach it to the AI.
Here are some ways to make it actionable:
Context isn’t static — it needs regular updates. A short check-in every week or a sprint retrospective where you update your AI’s context is a worthwhile habit.
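To make that habit concrete, here is a minimal sketch of one possible approach: keeping each context layer in a plain Markdown file and prepending whatever exists to the AI's system prompt. The file names, layer order, and the `build_system_prompt` helper are illustrative assumptions, not a prescribed tool – any team would pick its own layers.

```python
from pathlib import Path

# Hypothetical layer files, ordered from long-term vision to short-term action.
# The names are illustrative; choose layers that fit your team.
CONTEXT_LAYERS = [
    "context/vision.md",       # why the product exists
    "context/strategy.md",     # current priorities and trade-offs
    "context/conventions.md",  # architecture decisions, coding style
    "context/sprint.md",       # what we are working on right now
]

def build_system_prompt(base_dir: str = ".") -> str:
    """Concatenate whichever layer files exist into one system prompt."""
    sections = []
    for layer in CONTEXT_LAYERS:
        path = Path(base_dir) / layer
        if path.exists():
            sections.append(f"## {path.stem}\n{path.read_text().strip()}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    print(build_system_prompt())
```

Updating these files during the weekly check-in keeps the AI's picture of the project current – and missing layers simply drop out instead of breaking the prompt.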
The payoff? You stop re-explaining things. You avoid shallow or misleading AI results. And you move one step closer to a truly augmented workflow.
Final Thought
Context is not just background. It’s the operating system of collaboration.
When AI knows what we know, what we value, and what we’re trying to do, it can help us in ways that feel intuitive, aligned, and even human.
And that’s the kind of future I’m here for.
Stay contextual.
Christian