The Paradigm Shift: From Engineer to Engineer-Using-AI
We’re entering a new era of software engineering - one where engineers no longer write every line of code themselves. Instead, we’re becoming orchestrators of AI-driven development processes.
In this post, I’ll share one of the most important mindset shifts I’ve experienced: letting go of manual coding control and fully embracing AI as the primary engineer in the workflow.
This shift wasn’t just about productivity - it was about learning how AI coding tools like Cursor and LLMs actually work, and why manual interference often causes more harm than good.
Part 1: Shifting from Engineer to Technical Product Owner Mindset
The most profound change in this transition is that I am no longer directly responsible for writing code myself. Instead, I now act as a Technical Product Owner: defining what needs to be built, setting the quality bar, and reviewing the results.
The real coder in this setup is the AI tool - and there should be no exceptions to that. Manual interference only weakens the process.
From this point on, the AI must take the lead as the primary engineer.
As long as you let the AI fully run through its workflow - executing the chain of quality checks I described in my previous post - it delivers outstanding results. But the moment you intervene and start writing parts of the code yourself, you disrupt the process. Worse, this disruption isn’t a one-time issue! It recurs every time the AI touches that section of the codebase, especially the parts you wrote manually.
Why does this happen?
Large Language Models (LLMs) generate code based on predictions, not symbolic reasoning. They don’t understand your code the way a compiler or human does — they simply predict likely token sequences based on their training data and the given prompt.
For coding tasks, they typically operate with a low “temperature” setting to maximize determinism - meaning that the same prompt tends to produce the same output. This predictability is helpful for code quality but comes with a critical side effect: LLMs tend to predict the same function names consistently during both function creation and function usage.
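As a rough illustration, here's a sketch using the OpenAI Python client as a stand-in (Cursor's internals aren't public, so the model name and prompt here are assumptions):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# temperature=0 pushes sampling toward the single most likely token at each
# step, so the same prompt tends to reproduce the same code - and the same
# function names.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    temperature=0,
    messages=[
        {"role": "user", "content": "Write a Python function that fetches a user by id."}
    ],
)
print(response.choices[0].message.content)
```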
However, if you manually create a function with a name that the AI wouldn’t have predicted, it can cause a mismatch - because the AI will likely use its predicted name later on. In these cases, the AI may call a function that doesn’t exist - simply because it expected a slightly different name than the one you chose manually.
This happens more often than you might think, especially in larger projects where not all of the code fits into the AI's current context window.
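To make the failure mode concrete, here's a hypothetical illustration (all names invented for this sketch):

```python
# You wrote this by hand, with a name the model wouldn't have predicted:
def fetch_user_record(user_id: int) -> dict:
    return {"id": user_id}

# Later, editing another file without your definition in context, the AI
# emits the name it *predicts* for this job - and the call simply fails:
try:
    profile = get_user(user_id=42)  # the model's guess, never defined
except NameError as exc:
    print(exc)  # name 'get_user' is not defined
```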
To prevent this, I developed a custom Flake8 plugin using Cursor that flags undefined function calls in Python. Combined with my pre-commit hooks, this ensures that no invalid code (such as calls to non-existent functions) makes it into the codebase.
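My actual plugin isn't public, but a minimal sketch of the idea looks like this (the plugin name, error code, and heuristics below are assumptions, and a real version would also need to handle attribute calls, star imports, and names defined in other modules):

```python
import ast
import builtins


class UndefinedCallChecker:
    """Sketch of a Flake8 plugin that flags calls to names never defined or
    imported in the module. A real package would register this class via the
    flake8.extension entry point."""

    name = "flake8-undefined-calls"  # hypothetical package name
    version = "0.1.0"

    def __init__(self, tree: ast.AST):
        self.tree = tree

    def run(self):
        # Pass 1: collect every name the module defines, imports, or assigns.
        defined = set(dir(builtins))
        for node in ast.walk(self.tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                defined.add(node.name)
            elif isinstance(node, (ast.Import, ast.ImportFrom)):
                for alias in node.names:
                    defined.add(alias.asname or alias.name.split(".")[0])
            elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            elif isinstance(node, ast.arg):
                defined.add(node.arg)

        # Pass 2: flag plain-name calls that resolve to none of the above.
        for node in ast.walk(self.tree):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id not in defined:
                    yield (
                        node.lineno,
                        node.col_offset,
                        f"UFC001 call to undefined function '{node.func.id}'",
                        type(self),
                    )
```

Wired into pre-commit, a check like this stops the AI's phantom calls at the commit boundary instead of at runtime.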
A concrete example from my own experience:
I recently let AI (Cursor, using Claude 4) build a CRUD layer for a FastAPI service. Over and over again, it expected a specific GET endpoint - one that didn’t exist and wasn’t even needed in our architecture, since another endpoint already returned the required data.
But Claude consistently predicted calls to that missing endpoint - likely because it had learned this pattern during training and favored a “clean” REST structure with dedicated endpoints.
Eventually, I stopped fighting it. I added the endpoint purely to align with its expectations - even though we didn’t technically need it. Since then, the service has worked far more smoothly. Claude no longer hallucinates missing endpoints, and the overall flow is much more stable.
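In FastAPI terms, the fix amounted to adding a small, technically redundant route like the one below (paths and names are hypothetical, reconstructed for illustration):

```python
from fastapi import FastAPI

app = FastAPI()

# This list endpoint already returned everything clients needed...
@app.get("/orders")
def list_orders() -> list[dict]:
    return [{"id": 1, "status": "open"}]

# ...but Claude kept predicting a dedicated GET-by-id route, so it now
# exists purely to match the model's learned REST expectations.
@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    return {"id": order_id, "status": "open"}
```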
Sometimes, it’s easier to align with the AI’s learned habits than to force it into exceptions.
My biggest learning from this process:
Let the AI decide function names, class names, and even filenames. Fighting its naming predictions only creates unnecessary friction - because it simply isn’t designed to adapt to your manually invented names.
Final Thoughts: Stop Fighting the System
Many engineers struggle with the idea of giving up control over function names, class structures, or file organization. I get it - I did too. But here’s the hard truth: AI coding tools don’t care about your preferences. They work by predicting what they believe to be the most likely and consistent solution based on your prompts.
If you fight this process by imposing your manual naming conventions or code structures, you’ll constantly introduce friction and bugs - especially as your codebase grows and the AI’s context window becomes a limiting factor.
The smarter move?
Lean into the AI’s strengths. Let it own the code it generates - down to the function names. Once you adopt this mindset, you’ll unlock far smoother, faster, and more reliable coding workflows.
Part 2 of this series is coming soon.