How I use AI to set the right guardrails
Scaling micro-frontends sounds great on paper: modularity, autonomy, independent deployments. But once you're past the excitement of the initial setup, reality kicks in. Things start to become more challenging, not because your teams aren’t skilled, but because no one really thought about the cross-cutting concerns.
I’ve seen this happen in micro-frontends, backend services, and any distributed system where multiple teams move fast in parallel. The issues show up between the modules: dependency drift, performance inconsistencies, architectural erosion. And without guardrails, those small misalignments compound quickly.
Whether at DAZN or working with some of the largest Fortune 500 companies in the world, the story is often the same: it’s not the components or the build that slow you down, but a lack of governance.
That’s why I started putting in place simple, practical guardrails: things that help teams move fast without breaking the architecture or the user experience. And now, with AI code assistants in the mix, setting up those controls is faster than ever. You don’t need a full platform team to enforce good practices—you just need the right approach and a bit of guidance.
Challenges with distributed systems
Cross-cutting concerns get overlooked in the early days and become painful later. That’s understandable: there are many moving parts to absorb on the journey towards a distributed system. At some point, though, friction arises, and the most common sources I’ve found are:
1. Dependency management
In distributed systems, even updating a shared library can become a nightmare. You often need to touch multiple projects, trigger new builds, and redeploy apps one by one. If you skip updates, your performance suffers. If you rush them, something breaks. Without automation and visibility, keeping dependencies up to date becomes a time sink—and a silent liability.
2. Architectural drift
You start with a clean architecture, well-defined boundaries, and clear ownership. But over time, those lines blur. Especially when multiple teams are involved. People import things they shouldn't, violate layering, or reimplement logic because they can’t find the shared version. In monorepos or not, architectural erosion is real. And without clear enforcement and visibility, your system gradually becomes harder to evolve.
3. Performance
Everyone agrees performance matters, but when it’s shared across many teams, it’s easy for no one to own it. You end up with bloated bundles, inconsistent loading strategies, or duplicated dependencies—all of which hurt the user experience. Having performance budgets and checks is critical, but they need to be visible, automated, and easy to enforce without blocking progress.
What I’ve learned from the trenches
These challenges aren’t hypothetical: I’ve seen them play out in real systems, with real teams, under real pressure. At DAZN, where we built micro-frontends, we implemented some of these guardrails. They weren’t glamorous, but they made a difference.
But DAZN is just one story. Over the past few years, I’ve had the chance to work with hundreds of teams across the world—from fast-growing startups to Fortune 500 enterprises. No matter the industry or tech stack, the same problems kept surfacing: untracked dependencies, blurred boundaries, performance slowly degrading over time. The symptoms were different, but the root causes were always the same: distributed ownership without shared constraints.
That’s where practical guardrails come in.
And now that AI code assistants are part of the workflow, setting up and maintaining those guardrails has become dramatically easier. You still need to think critically and guide the design, but the heavy lifting can be shared. That’s what this next section is about.
Enforcing design decisions
In distributed systems, architecture tends to erode not because of big decisions—but because of small, unintentional shortcuts. A teammate imports a component from the wrong domain. Someone reaches into a shared folder without realizing the coupling they’re introducing. Over time, the architecture you designed slowly unravels.
That’s why I use an architecture-testing tool: a simple but powerful way to turn your architectural intent into automated, testable rules. In the example repo, I apply this to a micro-frontend to ensure it remains isolated and aligned with the overall system design.
Here's what I enforce:
- The catalog domain must not depend on other feature domains, such as account.
- Shared components must remain generic and must not depend on any feature domain.
- The catalog zone must remain free of dependency cycles.
All of this is captured in a single test suite that is clear, repeatable, and versioned like the rest of your code. It’s simple to write and even simpler to maintain.
With AI code assistants, I now scaffold these rules faster. I describe the architectural zones (e.g. "catalog shouldn’t know about account") and the assistant generates the assertions for me.
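As a sketch of what such generated assertions boil down to, here is a minimal hand-rolled fitness check over a module dependency graph. The zone names (`catalog`, `account`, `shared`) come from the example above; the graph itself and the path-prefix convention are illustrative assumptions, and a real repo would use a dedicated architecture-testing library instead.

```typescript
// Minimal architecture fitness check over a module dependency graph.
// Zone membership is derived from the path prefix (illustrative convention).
type Graph = Record<string, string[]>;

const zoneOf = (mod: string): string => mod.split("/")[0];

// Rule 1: modules in `from` must never import modules in `to`.
function assertNoDependency(graph: Graph, from: string, to: string): string[] {
  const violations: string[] = [];
  for (const [mod, deps] of Object.entries(graph)) {
    if (zoneOf(mod) !== from) continue;
    for (const dep of deps) {
      if (zoneOf(dep) === to) violations.push(`${mod} -> ${dep}`);
    }
  }
  return violations;
}

// Rule 2: no dependency cycles inside a zone (depth-first search).
function hasCycle(graph: Graph, zone: string): boolean {
  const visiting = new Set<string>();
  const done = new Set<string>();
  const dfs = (mod: string): boolean => {
    if (visiting.has(mod)) return true; // back-edge found: cycle
    if (done.has(mod)) return false;
    visiting.add(mod);
    for (const dep of graph[mod] ?? []) {
      if (zoneOf(dep) === zone && dfs(dep)) return true;
    }
    visiting.delete(mod);
    done.add(mod);
    return false;
  };
  return Object.keys(graph).some((m) => zoneOf(m) === zone && dfs(m));
}

// Example graph: catalog wrongly reaches into account.
const graph: Graph = {
  "catalog/list": ["shared/button", "account/session"],
  "account/session": [],
  "shared/button": [],
};

console.log(assertNoDependency(graph, "catalog", "account")); // one violation
console.log(hasCycle(graph, "catalog")); // false
```

Running these functions in your test suite keeps the rules versioned alongside the code they protect, exactly like any other test.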
When a rule fails, you have options: you can block the CI/CD pipeline to prevent the change from being merged, or simply notify the owning team and track the violation. Sometimes, as architects, we knowingly make trade-offs—maybe to hit a delivery milestone—but what matters is acknowledging those deviations and circling back to restore the intended architecture. Guardrails only work if they’re respected, revisited, and part of the team’s shared accountability.
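Both options map naturally onto CI configuration. A sketch in GitHub Actions terms (the `test:architecture` script name is a hypothetical placeholder): flipping a single flag turns a hard block into a tracked warning.

```yaml
# Run the architecture tests on every pull request.
name: architecture-guardrails
on: pull_request
jobs:
  fitness-functions:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - name: Architecture rules
        run: npm run test:architecture # hypothetical script name
        continue-on-error: false      # set true to notify instead of block
```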
Keeping performance in check
In micro-frontends, each team is independent and focused on its own feature, but when the pieces come together, users pay the cost: bloated bundles, slow load times, or duplicated code. Performance needs to be a shared responsibility, not an afterthought.
To make this actionable, I define performance budgets at the micro-frontend level. In the example repo, I use a bundle-analysis tool to set thresholds on build artifacts—like ensuring the main JavaScript bundle stays within a defined loading time. These checks can run during pull requests, pre-push hooks, or CI/CD workflows to create a fast feedback loop for developers.
AI code assistants make this even easier. Once you describe your budget—say, “warn if this app's main bundle takes longer than 1 second to load”—they can generate the config, integrate it into your pipeline, and even flag parts of the codebase that might benefit from lazy loading or better chunking. Most modern bundlers now support similar approaches, so whatever your setup, this kind of shift-left performance check is well within reach.
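Under the hood, a budget check is simple. This sketch compares bundle sizes against byte budgets; the file names, budget numbers, and the in-memory stats object are illustrative assumptions (in CI, the sizes would come from your bundler's stats output).

```typescript
// A minimal performance-budget check, the kind an assistant can
// scaffold into a CI step. Numbers and file names are illustrative:
// ~170 KB roughly approximates a 1s load on a slow connection.
interface Budget { asset: string; maxBytes: number }

const budgets: Budget[] = [
  { asset: "main.js", maxBytes: 170_000 },
  { asset: "vendor.js", maxBytes: 250_000 },
];

// In CI this would be parsed from the bundler's stats output.
const buildSizes: Record<string, number> = {
  "main.js": 158_432,
  "vendor.js": 248_112,
};

function checkBudgets(sizes: Record<string, number>, budgets: Budget[]): string[] {
  return budgets
    .filter((b) => (sizes[b.asset] ?? 0) > b.maxBytes)
    .map((b) => `${b.asset}: ${sizes[b.asset]} bytes exceeds budget of ${b.maxBytes}`);
}

const failures = checkBudgets(buildSizes, budgets);
if (failures.length > 0) {
  console.error(failures.join("\n"));
  process.exitCode = 1; // fail the CI job
} else {
  console.log("All bundles within budget");
}
```

Because the budgets live in code, they are reviewed, versioned, and adjusted deliberately rather than drifting silently.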
Managing dependencies
One of the trickiest parts of maintaining distributed frontends is managing dependencies—both public and internal. When you're dealing with multiple independently deployed micro-frontends, keeping packages aligned isn't just a hygiene issue. It's about performance, security, and avoiding regressions caused by uncoordinated upgrades—and, in certain scenarios, about consistency too.
In large-scale systems I’ve worked on, dependency drift has caused all kinds of friction. Updates to design systems, shared utilities, or even a critical patch in a public library often get delayed because each team has to update manually, test locally, and redeploy. The result? Slow rollouts, version mismatches, and missed performance gains.
In the repo, I demonstrate how to configure Dependabot to automate this process across internal and external packages. You can set it up to watch all package manifests, including shared packages and internal design systems published to private registries. The key is to define a reasonable schedule and grouping strategy—so teams aren’t bombarded with noise, but are still nudged to keep things up to date.
This is where AI steps in again. Rather than digging through documentation, I describe the setup I need (“check all MFEs in the monorepo for outdated dependencies, including private packages from our design system”) and let the assistant scaffold the YAML config for Dependabot or similar tools. It reduces the effort required to set it up—and more importantly, makes maintenance sustainable over time.
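The scaffolded result looks something like the sketch below. The registry URL, directory, and `@example/*` package scope are placeholders, not the repo's actual values; in practice you would repeat the `updates` entry (or use a glob) per micro-frontend.

```yaml
# .github/dependabot.yml (sketch; names and URLs are placeholders)
version: 2
registries:
  internal-npm:
    type: npm-registry
    url: https://registry.example.com
    token: ${{ secrets.INTERNAL_NPM_TOKEN }}
updates:
  - package-ecosystem: "npm"
    directory: "/apps/catalog" # one entry per micro-frontend
    registries:
      - internal-npm
    schedule:
      interval: "weekly"
    groups:
      design-system:
        patterns:
          - "@example/*" # internal design-system packages
```

Grouping the internal design-system packages into one pull request is what keeps the noise manageable while still surfacing every update.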
Understanding the landscape: DIY vs. off-the-shelf solutions
There’s no shortage of tools and platforms aiming to simplify managing micro-frontends and distributed systems. Solutions like Bit.dev, Zephyr, Piral, and Vercel offer varying degrees of automation and governance out of the box.
These platforms can dramatically reduce manual effort and speed up delivery, especially at scale. But it’s important to remember: before you adopt any tool, you should understand the underlying challenges and controls. Building these guardrails yourself—even at a basic level—gives you deep insight into the trade-offs, lets you tailor policies to your context, and empowers you to evaluate third-party offerings critically.
The approach I shared, leveraging AI to define and automate architecture rules, performance budgets, and dependency management, is a simple yet effective foundation. Once it's in place, you can confidently decide whether to extend it with a commercial platform or keep evolving your custom setup.
Summary
Managing distributed frontend systems is complex, but setting the right guardrails can make all the difference. Leveraging AI assistants to scaffold and maintain these guardrails accelerates adoption and frees you to focus on higher-level design and collaboration. While many commercial platforms offer baked-in governance, understanding the fundamentals empowers you to make smarter decisions tailored to your context.
If you want to dive deeper into these strategies and see more real-world examples, I invite you to check out my upcoming book "Building Micro-Frontends - 2nd edition" (now available for pre-order) and subscribe to my newsletter at buildingmicrofrontends.com.
Most importantly, did you find any other guardrail that you were able to spin up quickly with AI? Feel free to share your experiences or questions about using AI and guardrails in distributed architectures. Looking forward to hearing from you!