Type‑Safe Python & LLM Style Guides (how I guide my GenAI coding partner)
Collaborating with LLMs has become a key fixture of my development workflow. Over the past year, I've been using GenAI tools (namely Claude) like a developer on my team, but with a twist.
Instead of expecting the AI to automagically know my coding preferences, I provide it with very explicit and detailed guidance documents.
In this newsletter, I want to share how I collaborate with those LLMs using guides to my coding style and to my runtime type-safe Python framework.
Hopefully this will show how the approach yields tremendous value and productivity gains (it certainly does for me).
TL;DR: read the guidance documents.
Why my LLMs are given multiple 20+ page guidance documents
When I start a session with Claude, ChatGPT, Cursor, Lovable or something else, I don't just throw a task at it and hope for the best. I first share a tailor-made guidance document that captures my coding standards, project context, and even testing philosophy.
Think of it as onboarding the LLM to my team.
I always start with the “Type_Safe & Python Formatting Guide for LLMs” (version 2.90.1, updated 7th Sept 2025), which I provide to the GenAI models as reference and guidance.
This guide covers two crucial aspects of my coding style: how I use the Type_Safe framework for runtime type safety, and how I format and lay out my Python code.
Equipping the LLM with such a detailed brief has a powerful effect. Suddenly, the AI’s code suggestions stop feeling like the output of a distant generator and start reading like code I wrote myself (perhaps something I would have written a week or a month ago, i.e. code that still looks very similar and familiar to me).
The functions come back with my naming conventions, the classes follow my patterns, and even the unit tests (which I always ask the LLM to write for every class/method it creates) exhibit the structure I expect. In short, the LLM aligns with my approach. This upfront investment in guiding the AI pays off by dramatically reducing edit cycles, misunderstandings and cognitive load when onboarding this new code into my codebase.
Type‑Safe Python: Catching Errors at Runtime
Let's talk about Type_Safe in a bit more depth, because it's the backbone of how I code in Python now.
In traditional Python, you might sprinkle type hints in your code and use static analysis, but unfortunately, at runtime Python doesn't enforce those types. The Type_Safe framework (part of the OSBot-Utils open-source Python package, available on PyPI) turns that on its head: it enforces types during execution, acting as a safety net that catches mistakes early.
For example:
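Here is a minimal sketch of the kind of class I define with Type_Safe (the import path and the exact exception type are assumptions, so adjust them to whichever OSBot-Utils version you have installed):

```python
# NOTE: import path assumed from the OSBot-Utils package on PyPI; adjust it
#       to match the version you have installed.
from osbot_utils.type_safe.Type_Safe import Type_Safe

class Invoice(Type_Safe):                            # annotations are enforced at runtime
    customer_id : str
    amount      : int
    paid        : bool = False

invoice = Invoice()
invoice.customer_id = 'cust-123'                     # matches the declared type: accepted
invoice.amount      = '42'                           # convertible string: becomes the int 42

try:
    invoice.amount = 'not-a-number'                  # cannot be converted, so it is rejected
except (ValueError, TypeError) as error:             # exact exception type may vary by version
    print(f'caught bad assignment: {error}')
```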
This nuanced approach means the code is robust in the face of real-world data.
Data rarely arrives perfectly typed: there are strings where there should be numbers, missing fields, extra fields, and so on. Type_Safe acknowledges this reality and handles it gracefully by converting types whenever possible, rather than just throwing an error.
Another huge benefit of Type_Safe is the rich set of domain-specific types it provides. The framework includes dozens of specialised type-safe primitives for common domains and data formats. I have safe types for things like emails, URLs, IDs, money values, even specific ones for LLM prompts or GitHub repository names. Each comes with built-in validation rules, length limits, and sanitisation logic out of the box.
These become my first line of defence against bad data and even security issues!
For example, if I use Safe_Str__URL for a string that should be a URL, I immediately get an exception if someone tries to assign an invalid URL or one with forbidden characters. If I use Safe_Id for identifiers, it’ll strip and sanitise disallowed characters. By the time my code uses these values, I have high confidence they’re well-formed and safe.
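Here's roughly what that looks like in practice (the import paths below are assumptions, and the exact sanitisation rules depend on each safe type, so treat this as a sketch rather than the definitive API):

```python
# NOTE: import paths are assumptions; the class names are the ones mentioned
#       above, but their module locations may differ across OSBot-Utils versions.
from osbot_utils.helpers.Safe_Id import Safe_Id
from osbot_utils.helpers.safe_str.Safe_Str__URL import Safe_Str__URL

url = Safe_Str__URL('https://example.com/docs')         # well-formed URL: accepted

try:
    Safe_Str__URL('not a url <script>')                  # forbidden characters: rejected
except (ValueError, TypeError) as error:                 # exact exception type may vary
    print(f'rejected unsafe value: {error}')

user_id = Safe_Id('user 42!')                            # disallowed characters are sanitised
print(user_id)                                           # exact sanitisation rule depends on the type
```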
From a testing perspective, this runtime type enforcement flips the script on how I write tests. Instead of writing tests that assume types are correct, I actively write tests to ensure Type_Safe is catching what it should.
On testing, the guidance I give to the LLMs is the 23-page Type_Safe Testing Guidance document, which shows patterns like always testing both the "happy path" (correct types) and the conversion cases (feeding in strings that should become ints, etc.). It emphasises that each Safe type’s constraints must be verified, because these constraints encode business rules and security checks.
Instructing the LLMs with this detailed testing guide helps them write tests that follow my philosophy: for example, verifying that assigning a too-long string to a Safe_Str__Name field triggers a validation error, or that two different objects don’t share mutable state by accident.
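A test written in that style might look like the sketch below (Safe_Str__Name, its length limit and the import path are illustrative assumptions; substitute the safe types your project actually uses):

```python
from unittest import TestCase

# NOTE: illustrative sketch; the import path and the exact length limit of
#       Safe_Str__Name are assumptions.
from osbot_utils.helpers.safe_str.Safe_Str__Name import Safe_Str__Name

class test_Safe_Str__Name(TestCase):

    def test__happy_path(self):                           # the correct value passes through
        assert Safe_Str__Name('Dinis') == 'Dinis'

    def test__too_long_value_raises(self):                # the length limit encodes a business rule
        with self.assertRaises((ValueError, TypeError)):  # exact exception type may vary
            Safe_Str__Name('x' * 10_000)
```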
In fact, a super powerful feature is that all collection types are converted to Safe collections (lists to Type_Safe_List, dicts to Type_Safe_Dict, etc.). This ensures that no two objects ever silently share the same list or dict reference, while we still get runtime type-safety checks on lists, dicts, tuples and sets. It eliminates entire classes of Python bugs: for example, these specialised collections will safely handle an item of the wrong type being added to a list.
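A quick sketch of that behaviour (same caveat: the import path is an assumption, and the exact exception types may differ):

```python
# NOTE: import path assumed; adjust to your installed OSBot-Utils version.
from typing import List
from osbot_utils.type_safe.Type_Safe import Type_Safe

class Task_List(Type_Safe):
    tasks : List[str]                                     # becomes a type-safe list at runtime

list_a = Task_List()
list_b = Task_List()
assert list_a.tasks is not list_b.tasks                   # no silently shared list between instances

list_a.tasks.append('write newsletter')                   # correct item type: accepted
try:
    list_a.tasks.append(42)                               # wrong item type: rejected by the safe list
except (TypeError, ValueError) as error:                  # exact exception type may vary
    print(f'rejected bad item: {error}')
```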
By coding with Type_Safe and teaching the LLM about it, I get safer code and I catch issues at runtime that would otherwise lead to painful debugging sessions.
The LLM effectively becomes a peer reviewer that never forgets the rules I’ve laid out.
Integrating AI Guidance into My Workflow (The IFD Approach)
This approach of providing extensive guidance to the LLM is part of a broader methodology I practice, called Iterative Flow Development (IFD). In IFD, the idea is to build software in small, continuous iterations with a tight feedback loop, something that an AI assistant is remarkably good at doing.
Rather than writing a giant spec upfront, I work flow by flow, often asking the LLM to help implement the next slice of functionality, then immediately run and test it. The guidance docs act as the steadying hand that keeps these rapid iterations on track. They ensure that even as we iterate quickly, the code’s style and quality remain consistent.
To paint a quick picture of a typical day with this workflow: I describe the next small flow I need, the LLM implements it along with its tests, I run everything straight away, and we fix or refine based on what that run shows.
This tight loop continues, very much in line with IFD: small change, instant feedback, next iteration.
The AI is not just coding; it's learning my project’s idioms as we go (all within the context of one long chat thread).
From LLM Novice to AI Pair Programmer
Looking back, adopting this guided LLM collaboration approach has been crazily productive. In the early days, using an LLM felt like trying to drive a high-powered car with a mind of its own: sometimes it took me where I wanted, other times it veered off course.
But now the detailed guidance documents I provide have become the map and the rules of the road.
By clearly stating “here’s how we do things around here”, I’ve turned the LLM into a reliable pair programmer that writes in my voice and my coding style. It’s personal and professional at the same time.
In summary, the combination of a runtime type-safe framework and a comprehensive AI orientation guide has unlocked a new level of productivity for me. It’s like having a colleague who instantly adopts your best practices from day one.
And it really works for me :)