When I Grow Up: The Journey of an AI Enhanced Platform Engineer
Based on the adventures of AJ Geddes at Burns & McDonnell
personal github: https://github.com/aj-geddes
Chapter 1: The Matrix Has You
It all started with a cryptic bio on GitHub: "Not like this... I work for..." Those words, inspired by The Matrix, would prove prophetic as I began my journey deeper into the rabbit hole of AI engineering.
The year was 2023, and I had just created my GitHub account. My days were filled with platform engineering work at Burns & McDonnell, but my nights and weekends became dedicated to something else entirely - a growing fascination with artificial intelligence.
I started systematically working through all the free courses on AI available from MIT and Harvard. The foundational knowledge from MIT's "Introduction to Deep Learning" and Harvard's CS50 AI opened my eyes to possibilities I'd never considered. Each evening after work, I'd immerse myself in lectures on neural networks, natural language processing, and machine learning algorithms.
These weren't just academic exercises. I began applying what I learned to small projects, experimenting with models, and pushing the boundaries of what I thought possible. The more I learned, the more I realized how AI could transform not just software development, but the entire engineering workflow.
Little did I know that two years later, this educational foundation would enable me to push the boundaries of what was possible with Claude, Docker containers, and something called Model Context Protocol servers that would change everything about how engineering was done.
Chapter 2: The MCP Results
My deep dive into MIT and Harvard's AI courses led to an unexpected breakthrough. While exploring model capabilities in my personal time, I stumbled upon the concept of Model Context Protocol (MCP) - a framework that could extend AI capabilities beyond mere conversation.
The initial results in my home experiments were transformative. By creating a structured communication layer between Claude and various system components, I could enable the AI to perform actual operations rather than just discuss them. It wasn't just answering questions anymore—it was interacting with systems.
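In spirit, that structured communication layer can be sketched as a tiny tool dispatcher. This is a simplified stand-in for a real MCP server, not its actual protocol; the tool names and the JSON request shape here are my own illustrative assumptions:

```python
import json
import os

# Hypothetical tool registry: maps tool names the model may request
# to local functions that actually perform the operation.
TOOLS = {
    "list_dir": lambda args: sorted(os.listdir(args["path"])),
    "read_file": lambda args: open(args["path"], encoding="utf-8").read(),
}

def handle_tool_call(request_json: str) -> str:
    """Dispatch one structured tool call, e.g.
    {"tool": "list_dir", "args": {"path": "."}}, and return a JSON reply."""
    request = json.loads(request_json)
    handler = TOOLS.get(request["tool"])
    if handler is None:
        return json.dumps({"error": f"unknown tool: {request['tool']}"})
    try:
        return json.dumps({"result": handler(request["args"])})
    except OSError as exc:
        return json.dumps({"error": str(exc)})
```

The essential shift is visible even in a toy like this: the model's output is no longer free text but a structured request that the host can validate, execute, and answer.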
Outside of work hours, I spent time exploring how this approach could handle complex engineering tasks like improving documentation standards and code quality in engineering workflows. Traditional manual documentation was mind-numbing, often producing inconsistent quality and outdated information. But with MCP, everything changed in my personal projects.
A simple prompt to Claude could now generate comprehensive documentation with Mermaid diagrams, code samples, and detailed explanations—all following exact style guidelines with consistent formatting and themes. The system wasn't just understanding code - it was extending it, improving it, and documenting it better than traditional methods ever could.
More importantly, the ability to customize actions through MCP servers meant that I could extend Claude's capabilities in targeted ways, focusing on the specific needs of engineering workflows. This wasn't a general-purpose tool anymore—it was becoming a specialized engineering partner in my personal laboratory.
As I refined these capabilities in my evening hours, I began to realize that MCP wasn't just a technical framework. It was a fundamental shift in how AI and engineering could work together. The results weren't just incremental improvements—they were revolutionary changes to engineering workflows that had remained essentially unchanged for decades.
And though these experiments remained completely separate from my daytime responsibilities at Burns & McDonnell, they represented a personal passion that would drive my open source contributions in the years to come.
Chapter 3: The Local Laboratory
As my AI journey progressed, I began creating my own personal projects locally on my machine. I developed a suite of tools that reflected my growing expertise in Model Context Protocol servers and AI-enhanced infrastructure, all saved as private code repositories on my local systems.
In June 2025, I finally decided to share some of this work with the wider community. I created the GitHub account aj-geddes and began the meticulous process of organizing my local code into public repositories for general benefit. After years of private development and experimentation, I felt some of my code was ready for public release.
My flagship project, terry-form-mcp, allowed AI assistants to execute Terraform commands locally through a secure, containerized environment using HashiCorp's official Terraform Docker image. Upon its public release on GitHub, it quickly gained popularity, receiving stars from other developers who saw its potential for revolutionizing infrastructure-as-code workflows.
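The containerized execution pattern behind a tool like this can be sketched in a few lines. This is my reconstruction of the general idea, not terry-form-mcp's actual code; the image tag and mount path are assumptions:

```python
import subprocess

# HashiCorp's official Terraform image; pinning a specific tag is wiser
# in practice, "latest" is used here only for illustration.
TERRAFORM_IMAGE = "hashicorp/terraform:latest"

def build_docker_command(subcommand: str, workdir: str) -> list[str]:
    """Build a `docker run` invocation that executes a Terraform subcommand
    against a host directory, without installing Terraform on the host."""
    return [
        "docker", "run", "--rm",
        "-v", f"{workdir}:/workspace",  # mount the module into the container
        "-w", "/workspace",
        TERRAFORM_IMAGE,
        subcommand, "-no-color",
    ]

def run_terraform(subcommand: str, workdir: str) -> str:
    """Run the containerized Terraform command and return its stdout."""
    result = subprocess.run(
        build_docker_command(subcommand, workdir),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

Isolating execution this way means the AI never touches the host's Terraform installation or credentials directly; the container boundary is the security model.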
I followed this with python-mcp-server-template, a production-ready framework that simplified the creation of new MCP servers. The template abstracted away the boilerplate code that had previously made MCP server development tedious, allowing developers to focus on the unique functionality they wanted to implement.
For enhanced filesystem operations, I created fastfs-mcp, combining file system and git operations in a single, optimized MCP server. The benchmark tests showed remarkable improvements: 73% faster file operations and 47% reduced latency for git commands compared to standard implementations.
My useful-ai-prompts repository became a community resource, offering meticulously crafted prompts for various engineering tasks. Each prompt was documented with examples of inputs and outputs, making it easier for engineers to integrate AI assistance into their workflows.
Perhaps the most impressive demonstration of my work came during a live session on Discord for MCP developers. In front of an audience of skeptical peers, I showcased terry-form-mcp's capabilities by having it generate an entire project from scratch. The result was terraform-module-demo-v2, a fully functional, production-ready Terraform module for Azure Resource Groups with comprehensive validation, documentation, and development workflows.
The Discord channel exploded with reactions as the audience watched the module take shape in real-time. What would have taken days of careful coding was completed in minutes, with all best practices implemented automatically. The demo wasn't just a technical showcase—it was proof that AI-generated infrastructure code could meet enterprise standards. The terraform-module-demo-v2 repository remained in my GitHub as both documentation of that watershed moment and a template for others in the community.
Late nights and early mornings were dedicated to these projects. While my daytime hours belonged to Burns & McDonnell, my personal time was invested in advancing the state of the art in AI-enhanced engineering tools. The timestamps on my commits told a story of dedication: 2 AM pushes, 5 AM pull requests, and weekend-long coding sessions.
What started as personal experiments soon developed into a suite of interconnected tools that were gaining traction in the open source community. These projects weren't just repositories; they were practical solutions to real-world engineering problems I encountered daily.
Chapter 4: The Containerized Revolution
My evenings and weekends were consumed by experimentation with Docker containers and MCP servers. While completely separate from my daytime responsibilities at Burns & McDonnell, this personal journey was teaching me valuable lessons about containerization and secure environments.
The journey wasn't easy. I remember long days troubleshooting OpenBao configurations, battling persistent volume permissions and Raft storage bolt file timeouts. The error messages still haunt my dreams: OpenBao failing to start because of Raft storage bolt file timeouts and permission issues.
But those struggles led to breakthroughs in my personal projects. Leveraging Docker containers for MCP servers created a secure, isolated environment where Claude could interact with systems without exposing sensitive data.
It wasn't just about the answers anymore. It was about having an AI that could actually do things—query databases, inspect code, generate documentation, and even commit changes back to repositories.
The beauty of it was that everything stayed within the infrastructure. The Claude Desktop MCP architecture provided all the power of AI while maintaining strict security standards.
My open source work had evolved completely independently from my professional responsibilities. The knowledge gained from developing terry-form-mcp and fastfs-mcp remained strictly in my personal GitHub repositories, ensuring a clear separation between my employment and my personal intellectual property.
Chapter 5: From Terraform to Transformation
While my daytime work at Burns & McDonnell focused on traditional platform engineering with Terraform, Kubernetes, and cloud infrastructure, my personal projects were exploring how AI could transform these processes.
The real transformation in my personal exploration began when I connected multi-cloud Terraform infrastructure to my MCP system. Building on the foundation of my open source terry-form-mcp project, I created a compact Multi-Cloud Terraform MCP server that reduced token usage while maintaining core functionality:
tf_init, tf_plan, tf_apply, tf_destroy, state_list
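A compact tool surface like that might map onto the Terraform CLI roughly as follows. The five tool names come from the list above; the specific CLI flags and the mapping itself are my assumptions, not the server's actual implementation:

```python
# Map the compact MCP tool names to the Terraform CLI arguments they wrap.
# Non-interactive flags keep the commands safe to run from an automated host.
TOOL_MAP = {
    "tf_init":    ["init", "-input=false"],
    "tf_plan":    ["plan", "-input=false"],
    "tf_apply":   ["apply", "-auto-approve", "-input=false"],
    "tf_destroy": ["destroy", "-auto-approve", "-input=false"],
    "state_list": ["state", "list"],
}

def terraform_args(tool: str) -> list[str]:
    """Translate a tool name into the full Terraform command-line argument list."""
    if tool not in TOOL_MAP:
        raise ValueError(f"unsupported tool: {tool}")
    return ["terraform", *TOOL_MAP[tool]]
```

Keeping the surface this small is also where the token savings come from: the model only needs to know five names, not the full Terraform CLI.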
The lessons from terraform-module-demo-v2 proved invaluable as I refined these capabilities in my home lab environment. What made this particularly satisfying was knowing that terraform-module-demo-v2 itself had been created by terry-form-mcp during a live Discord demonstration for MCP developers. The tool I had built was now helping me build other tools in a virtuous cycle of AI-enhanced development.
The same validation patterns and documentation workflows that had impressed my peers during that memorable Discord session continued to evolve in my open source repositories. What had begun as a demo to showcase capabilities had evolved into a template that others in the community could build upon.
In my personal projects, I imagined a world where platform engineers could deploy infrastructure across Azure, AWS, and GCP using natural language instructions. The days of manually crafting HCL for every resource would be in the past.
The tedious memorization of Terraform module structures and security requirements could be eliminated. The system could handle all of it—following best practices automatically.
It could enforce required files like .header.md, data.tf, main.tf, and more. It wouldn't just know about them - it would enforce them, like having a senior engineer checking work in real-time, except it would never get tired and never miss anything.
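Enforcing that kind of file convention is simple to automate. A minimal sketch, assuming the required-file list from the text (`.header.md`, `data.tf`, `main.tf`; the remaining entries are common Terraform conventions I've added for illustration):

```python
from pathlib import Path

# Required files per Terraform module. The first three come from the
# convention described above; variables.tf and outputs.tf are assumed extras.
REQUIRED_FILES = [".header.md", "data.tf", "main.tf", "variables.tf", "outputs.tf"]

def missing_required_files(module_dir: str) -> list[str]:
    """Return the required files absent from a Terraform module directory."""
    root = Path(module_dir)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]
```

Wired into a pre-commit hook or an MCP tool, a check like this is the "senior engineer who never gets tired": every module is validated the same way, every time.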
Though these capabilities remained in my personal projects and open source work, the vision of what was possible continued to drive my passion for AI-enhanced engineering.
Chapter 6: The AI Enhanced Platform Engineer I Became
Looking around the office in 2025, it was hard to believe how far things had come. I didn't consider myself an AI engineer in the traditional sense. I had become something more specific: an AI enhanced Platform Engineer, deliberately focusing on creating tools to augment my weaknesses while amplifying my strengths.
"I write tools to do the things I'm weakest at," I'd explain when asked about my approach. "And I make sure to keep myself in the 'Balmer Peak' of productivity." The reference to XKCD #323 usually got a laugh from fellow engineers who understood the comic's theory about optimal programming skill occurring at precisely 0.1337% blood alcohol concentration.
At the core of my philosophy was a simple truth: no matter how advanced the AI became, it always needed a skilled human at the helm. The technology couldn't work without the human judgment, creativity, and domain expertise that I brought to the table. What AI did provide was freedom from the mindless repetition that had plagued IT for decades.
"We shouldn't have to write our name 100 times anymore in IT," I'd often say. "Let the machines handle the tedium while we focus on the truly human aspects of engineering—creativity, problem-solving, and innovation."
My professional work at Burns & McDonnell remained completely separate from my personal AI engineering projects. During the day, I focused on my responsibilities as a Platform Engineer, building and maintaining enterprise infrastructure and helping teams adopt modern DevOps practices. This clear separation was important—it ensured that my personal explorations with MCP servers and AI assistants remained my own intellectual property.
As BMCD embraced enterprise AI solutions, opportunities to enhance my internal workflows blossomed. Tools like Andi.burnsmcd.ai became my writing assistant and general knowledge repository, helping me produce better documentation and streamline communications. This corporate adoption of AI complemented my personal interests without crossing boundaries.
The numbers at BMCD spoke for themselves. Since joining Burns & McDonnell in 2023, I'd contributed to hundreds of repositories within the organization, with many hundreds of pull requests merged across a diverse array of projects. These contributions were entirely focused on traditional platform engineering tasks: Terraform modules, GitHub Actions workflows, containerization strategies, and cloud infrastructure.
What made this journey even more remarkable was that all of my AI engineering work had been done before and after core working hours. The timestamps on my commits told the story: early mornings before sunrise, late evenings after everyone had gone home, and weekends dedicated to innovation. This wasn't just a job—it was a passion that consumed every available moment outside of work.
My latest project, a Language Server Protocol implementation for Playwright (playwright-lsp), reflected my commitment to improving developer experience. By bringing intelligent code completion and inline documentation to Playwright test files, I was addressing a pain point I'd experienced firsthand.
My daytime contributions at BMCD spanned the technological spectrum: modernizing GitHub Actions workflows for Cypress and Playwright testing frameworks, implementing OIDC-based Azure authentication with fallback mechanisms, enhancing Docker build pipelines, and configuring Helm deployments for Kubernetes.
The key had been understanding that AI wasn't about replacement but augmentation—giving engineers superpowers to do more with less effort. Though my personal AI projects remained separate from my BMCD work, the perspective and problem-solving approaches I gained informed my professional mindset.
"When I was a kid and people asked what I wanted to be when I grew up, I never said 'AI enhanced Platform Engineer,'" I often reflected. "But now I can't imagine doing anything else."
And what exactly does an AI enhanced Platform Engineer do? We don't just write code for machines to follow. We create systems that learn, adapt, and amplify human potential where we're weakest. We're not just programming computers—we're teaching them to be partners that complement our capabilities, always with the human engineer as the essential component in the system. After all, the best AI in the world is only as good as the human who guides it.
Thank you for taking the time to read my story; I appreciate it. I'm grateful for where I am and for the skills I've built along the way.