Type‑Safe Python & LLM Style Guides (how I guide my GenAI coding partner)

Collaborating with LLMs has become a key fixture of my development workflow. Over the past year, I've been using GenAI tools (namely Claude) like a developer on my team, but with a twist.

Instead of expecting the AI to automagically know my coding preferences, I provide it with very explicit and detailed guidance documents.

In this newsletter, I want to share how I collaborate with those LLMs using guides to my coding style and to my runtime type-safe Python framework.

Hopefully this will show how this approach yields tremendous value and productivity gains (which it does for me).

TLDR: read the guidance documents.


Why my LLMs are given multiple 20+ page guidance documents

When I start a session with Claude, ChatGPT, Cursor, Lovable or something else, I don't just throw a task at it and hope for the best. I first share a tailor-made guidance document that captures my coding standards, project context, and even testing philosophy.

Think of it as onboarding the LLM to my team.

I always start with the “Type_Safe & Python Formatting Guide for LLMs” (version 2.90.1, updated 7th Sept 2025), which I provide to the GenAI models as reference and guidance.

This guide covers two crucial aspects of my coding style:

  • A Specialised Python Formatting Style: I favour visual alignment and dense, context-rich code over strict PEP-8 compliance. The guide explains that my formatting "uses vertical alignment to create visual lanes that make code structure immediately apparent". In practice, this means related code elements line up vertically, making patterns and anomalies pop out instantly during reviews. It might look unconventional, but it recognises that code is read far more often than written, so optimising for human pattern recognition pays off.
  • The Type_Safe Framework Principles: I’ve built my Python code around Type_Safe, a runtime type-checking framework that enforces strict type constraints during execution. Unlike Python’s normal type hints (which are ignored at runtime), Type_Safe actually validates every operation and even auto-initialises attributes with safe defaults. There is also a critical principle from the guide: Ban Raw Primitives. I never use raw str, int, or float for class fields; instead I use Safe variants. The guide bluntly states: “NEVER use raw str, int, or float in Type_Safe classes... Raw primitives enable entire categories of bugs and security vulnerabilities.” Sharing this rule with the LLM means it knows to avoid bare primitives and to prefer my library’s safer types (both of these ideas are sketched in code just after this list).
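The sketch below makes both ideas concrete. The class and field names are invented for this post, and the Safe_* types are defined as trivial stand-ins so the snippet runs anywhere; the real framework's primitives carry actual validation logic.

```python
# Illustrative stand-ins: in real code these come from OSBot-Utils and carry
# validation and sanitisation logic; here they are trivial subclasses so the
# snippet is self-contained.
class Safe_Id         (str): pass
class Safe_Str__Email (str): pass
class Safe_UInt       (int): pass

class Schema__Customer:                          # fields line up in 'visual lanes'
    customer_id : Safe_Id                        # never a raw str
    email       : Safe_Str__Email                # never a raw str
    age         : Safe_UInt                      # never a raw int
```

Scanning down the middle lane shows every field and its Safe type at a glance, which is exactly the pattern-recognition effect the formatting guide optimises for.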

Equipping the LLM with such a detailed brief has a powerful effect. Suddenly, the AI’s code suggestions stop feeling like outputs from a distant generator and start reading like code I wrote (maybe something I would have written a week or a month ago, i.e. code that still looks very similar and familiar to me).

The functions come back with my naming conventions, the classes follow my patterns, and even the unit tests (which I always ask the LLM to write for every class/method it creates) exhibit the structure I expect. In short, the LLM aligns with my approach. This upfront investment in guiding the AI pays off by dramatically reducing edit cycles, misunderstandings and cognitive load when onboarding this new code into my codebase.

Type‑Safe Python: Catching Errors at Runtime

Let's talk about Type_Safe in a bit more depth, because it's the backbone of how I code in Python now.

In traditional Python, you might sprinkle type hints in your code and use static analysis, but unfortunately at runtime, Python doesn't enforce those types. The Type_Safe framework (part of the OSBot-Utils open source Python package available on PyPI) turns that idea on its head, since it enforces types during execution, acting as a safety net that catches mistakes early.

For example:

  • If I have a Schema__User class and I assign user.age = "25" (a string instead of an integer), Type_Safe will attempt to auto-convert that string to an integer (and wrap it in a Safe_UInt type) on the fly.
  • If the conversion makes sense, it performs it and validates the value.
  • If the value is out of bounds or invalid, it raises a clear ValueError.
  • If it’s something that simply can’t be converted (like assigning a dict to an int field), it raises a TypeError (see the toy sketch below).
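To be clear, this is not the real Type_Safe implementation from OSBot-Utils; it is a self-contained toy illustration of the convert-validate-or-fail logic just described:

```python
class Safe_UInt(int):                            # toy unsigned-int primitive
    def __new__(cls, value=0):
        value = int(value)                       # "25" -> 25; a dict raises TypeError
        if value < 0:
            raise ValueError(f"Safe_UInt cannot be negative, got {value}")
        return super().__new__(cls, value)

class Schema__User:                              # stands in for a real Type_Safe class
    def __setattr__(self, name, value):
        if name == 'age':
            value = Safe_UInt(value)             # enforce and convert at assignment time
        super().__setattr__(name, value)

user     = Schema__User()
user.age = "25"                                  # the string is converted and wrapped
assert user.age == 25 and type(user.age) is Safe_UInt

# user.age = "-5"  would raise ValueError (out of bounds)
# user.age = {}    would raise TypeError  (cannot be converted)
```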

This nuanced approach means the code is robust in the face of real-world data.

Data rarely comes in perfectly typed; there are strings where there should be numbers, missing fields, extra fields, etc. Type_Safe acknowledges this reality and handles it gracefully by converting types whenever possible, rather than just throwing an error.

Another huge benefit of Type_Safe is the rich set of domain-specific types it provides. The framework includes dozens of specialised type-safe primitives for common domains and data formats. I have safe types for things like emails, URLs, IDs, money values, even specific ones for LLM prompts or GitHub repository names. Each comes with built-in validation rules, length limits, and sanitisation logic out of the box.

These become my first line of defence against bad data and even security issues!

For example, if I use Safe_Str__URL for a string that should be a URL, I immediately get an exception if someone tries to assign an invalid URL or one with forbidden characters. If I use Safe_Id for identifiers, it’ll strip and sanitise disallowed characters. By the time my code uses these values, I have high confidence they’re well-formed and safe.
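The pattern behind these primitives is simple enough to sketch. This toy version (again, not the real OSBot-Utils code) shows the sanitise-then-validate shape the Safe types follow:

```python
import re

class Safe_Id(str):                              # toy identifier primitive
    max_length = 36
    bad_chars  = re.compile(r'[^a-zA-Z0-9_\-]')  # anything outside the whitelist

    def __new__(cls, value=''):
        value = cls.bad_chars.sub('_', str(value).strip())   # strip + sanitise
        if len(value) > cls.max_length:
            raise ValueError(f"Safe_Id is limited to {cls.max_length} chars")
        return super().__new__(cls, value)

print(Safe_Id('  user/123  '))                   # -> 'user_123'
```

Because the validation lives in the type itself, every code path that constructs or assigns one of these values gets the same checks for free.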

From a testing perspective, this runtime type enforcement flips the script on how I write tests. Instead of writing tests that assume types are correct, I actively write tests to ensure Type_Safe is catching what it should.

On testing, the guidance I give to the LLMs is the 23-page Type_Safe Testing Guidance document, which shows patterns like always testing both the "happy path" (correct types) and the conversion cases (feeding in strings that should become ints, etc.). It emphasises that each Safe type’s constraints must be verified, because these constraints encode business rules and security checks.

By instructing the LLMs with this detailed testing guide, it helps them to write tests that follow my philosophy: for example, verifying that assigning a too-long string to a Safe_Str__Name field triggers a validation error, or that two different objects don’t share mutable state by accident.
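To give a flavour of what those tests look like, here is a sketch written against the toy Schema__User from the earlier snippet (the real guide's naming and structure conventions are richer than this):

```python
import pytest   # assumes the toy Schema__User / Safe_UInt sketch defined above

class Test_Schema__User:
    def test_correct_type_passes_through(self):      # the happy path
        user     = Schema__User()
        user.age = 25
        assert user.age == 25

    def test_compatible_string_is_converted(self):   # the conversion case
        user     = Schema__User()
        user.age = "25"
        assert user.age == 25

    def test_invalid_values_are_rejected(self):      # constraints encode business rules
        user = Schema__User()
        with pytest.raises(ValueError):              # out of bounds
            user.age = -1
        with pytest.raises(TypeError):               # cannot be converted
            user.age = {}
```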

In fact, a super powerful feature is that all collection types are converted to Safe collections (lists to Type_Safe_List, dicts to Type_Safe_Dict, etc.), which ensures no two objects ever silently share the same list or dict reference, and we still get runtime type-safety checks on lists, dicts, tuples and sets. This eliminates entire classes of Python bugs, since, for example, these specialised classes safely handle scenarios where an item of the wrong type is added to a list.
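A toy sketch of that collection behaviour (the real Type_Safe_List and Type_Safe_Dict in OSBot-Utils do considerably more):

```python
class Type_Safe_List(list):                      # toy typed list
    def __init__(self, expected_type, items=None):
        self.expected_type = expected_type
        super().__init__(items or [])

    def append(self, item):
        if not isinstance(item, self.expected_type):
            raise TypeError(f"expected {self.expected_type.__name__}, "
                            f"got {type(item).__name__}")
        super().append(item)

tags_a = Type_Safe_List(str)
tags_b = Type_Safe_List(str)                     # separate instances: no shared state
tags_a.append("alpha")
assert tags_b == []                              # tags_b is unaffected

try:
    tags_a.append(42)                            # wrong type is rejected at runtime
except TypeError as error:
    print(error)                                 # expected str, got int
```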

By coding with Type_Safe and teaching the LLM about it, I get safer code and I catch issues at runtime that would otherwise lead to painful debugging sessions.

The LLM effectively becomes a peer reviewer that never forgets the rules I’ve laid out.

Integrating AI Guidance into My Workflow (The IFD Approach)

This approach of providing extensive guidance to the LLM is part of a broader methodology I practice, called Iterative Flow Development (IFD). In IFD, the idea is to build software in small, continuous iterations with a tight feedback loop, something that an AI assistant is remarkably good at doing.

Rather than writing a giant spec upfront, I work flow by flow, often asking the LLM to help implement the next slice of functionality, then immediately run and test it. The guidance docs act as the steadying hand that keeps these rapid iterations on track. They ensure that even as we iterate quickly, the code’s style and quality remain consistent.

To paint a quick picture of a typical day with this workflow:

  1. I start by describing a feature to the LLM, for example: “let’s create a new Schema__Report class for generating some analytics, with fields X, Y, Z”.
  2. I also provide the guidance documents, and usually some large blocks of relevant source code (how I do this is a topic for another post).
  3. The LLM writes the class code, complete with Safe types for each field, properly aligned formatting, and the respective unit tests.
  4. I copy and paste the code into my app (while reviewing it and making tweaks where necessary).
  5. I execute the tests, and if a type conversion raises an error, I am immediately able to see why it failed thanks to the strict Type_Safe checks.
  6. Maybe I realise I forgot a constraint, so I update the code (or ask the LLM to) and try again.

This tight loop continues, very much in line with IFD: small change, instant feedback, next iteration.

The AI is not just coding; it's learning my project’s idioms as we go (all within the context of one long chat thread).

From LLM Novice to AI Pair Programmer

Looking back, adopting this guided LLM collaboration approach has been crazily productive. In the early days, using an LLM felt like trying to drive a high-powered car that had a mind of its own: sometimes it took me where I wanted, other times it veered off course.

But now the detailed guidance documents I provide have become the map and the rules of the road.

By clearly stating “here’s how we do things around here”, I turned the LLM into a reliable pair programmer that writes in my voice and my coding style. It’s personal and professional at the same time:

  • personal: because the style guide captures my hard-earned (opinionated) preferences
  • professional: because the outcome is high-quality and maintainable code

In summary, the combination of a runtime type-safe framework and a comprehensive AI orientation guide has unlocked a new level of productivity for me. It’s like having a colleague who instantly adopts your best practices from day one.

And it really works for me :)

