AI & DevOps tools in 2025! (part 1)
I’ve been absent for the last couple of weeks without posting any article, and the reason is simple: I was busy exploring how DevOps tooling has changed with all the new AI tools of recent years. But instead of just reading documentation or sharing someone else’s blog, I did it my way: I built my own SaaS app to test everything myself, from idea to monitoring. Even though my title says Solution Architect, my blood says Engineer, and as an engineer I can’t just read about tools in 2025 and post screenshots. So I decided to test all of them by creating a real product, alone, in just 2 weeks. The result? Check out https://dev.aipipelines.co.uk (Alpha version). I used every AI shortcut I could, and now I’m on track to launch the production MVP in 2 more weeks.
Now let’s get to the point. I’m going to split the article into DevOps phases so I can present the tools I used in each phase:
Phase: Plan
Since I wanted to start truly from scratch, I didn’t begin with the plan itself. Instead, I started with idea creation: refining the ideas, converting them into requirements, and turning those requirements into epics/tasks so I could track my progress.
Step 1: Ideas → Requirements
I started with the winner of the tools of the year: ChatGPT. I know everyone will think that’s no big deal, since everyone uses ChatGPT, but believe me, it’s not as simple as it looks. There are many features I wasn’t aware of, like projects, context, and the memory attached to your account. If you speak Spanish, I highly recommend this video: https://www.youtube.com/watch?v=4uJAxUm-Lxw. If you don’t, don’t worry, I’ll write an article explaining its key features. So, as I mentioned, it’s not just about asking ChatGPT for ideas. Instead, you need to configure it: use projects, add context and memory, and attach examples of the products and styles you want to follow, so the prompting becomes simpler. Once you have your requirements, put them into Atlassian Confluence. Why? Because it’s free for up to 10 users, and because Atlassian Intelligence lets you clean up and refine the requirements even further just by asking it to.
Step 2: Requirements → Planning
Once I had my ideas and requirements well-defined, I needed to convert them into a trackable plan. Of course, as an Atlassian Champion, I used Jira to plan my activities — but instead of doing it manually, I used a combination of Rovo and Intelligence.
First, I started with Rovo (the AI agent in Atlassian tools, compatible with Jira and Confluence). From either tool, I could access all my requirements and ask Rovo to create my epics. I tried a couple of times, but it didn’t work as expected. The main reason? Yes, Rovo has access to your data, but it’s not smart enough to provide structured plans.
So I asked ChatGPT to create a prompt to generate Epics/Tasks using my Confluence content as context. Then I passed that prompt to Rovo and — voilà! — my requirements were converted into a clear plan with Epics/Tasks. Finally, I used Atlassian Intelligence to refine my tasks/epics using the “Writing Assistant” button, which added proper structure to my tickets.
Step 3: Add Designs
I know this is not technically part of planning, but it comes right after and before coding. What I needed here were two things: first, a flow for my application defining the functionality. I asked ChatGPT again and reviewed some options. The one I liked for its simplicity was Flowstep, a new AI tool that generates basic screens from a prompt — but with a powerful process flow. It really helped define how my app would work.
Next, I needed to convert my ideas into UI designs. Initially, I wanted to use Figma and do it manually, but I explored their AI capabilities and discovered Figma Maker. It allows you to prompt and generate frontend mockups with well-defined structure and styles. I tried prompting manually and it didn’t work well. Then I passed the prompt to ChatGPT to refine, adding more details and full project context. With that improved prompt, Figma Maker generated a truly usable frontend with great UI/UX.
Phases: Code, Build and Test
(I’m grouping these three because the tools are closely related.)
Step 4: Architecture
Before I started coding, I needed to define clearly how many repos to use, which set of tools my competitors were using for the problem I was trying to solve, and to scope down the tooling based on how far I wanted to take the deployment. This one was a quick win. Inside my ChatGPT project, I created another chat (yes, you can create separate chats within a project to keep the conversation threads apart), and in that one I added the context related to the architecture. I identified the programming language (Node.js + React in my case), the libraries, and the full cloud resource architecture: in my case, a combination of AWS services like ALB, EC2, Fargate, SQS, Bedrock, Lambda, S3, Route53, etc., all following some security and scalability principles. I also included tools like Docker and Docker Compose for dev environments and local testing.
Step 5: Coding
The fun part starts. Here the winner is Cursor, with Bitbucket for the repository. I started with the frontend mockup in React. Initially, I was prompting Cursor manually using my Figma designs to create a usable frontend, but I noticed that was taking too much time. Trying to speed up the process, I realised that Figma Maker lets you export your fancy mockup as code. So I did: I exported the UI example into a separate sample directory in my repo, asked Cursor to read it and use it as the reference for all my styles, and then asked it to convert that code into a functional frontend using my stack. And it worked. In a few hours, all the code generated from Figma was converted into the programming stack I was using. Of course, I asked ChatGPT to suggest code structure best practices so I could pass that guidance to Cursor. Without guidance, Cursor creates an unstructured but working app; it really needs direction. But once you guide it, it works like magic. I managed to create a functional frontend (with dummy data) and had it running locally in less than one day.
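To make that conversion step concrete, here is a rough sketch of what I mean: the mostly static markup exported from Figma gets rewrapped into small, typed React components. The component and prop names below are made up for illustration; they are not the app’s real code.

```typescript
// Illustrative only: a Figma-exported block rewritten as a typed React component.
// Names, props, and class names are placeholders, not the app's actual code.
import React from "react";

type StatCardProps = {
  label: string;
  value: string;
};

// The exported Figma markup was mostly static divs with inline styles;
// Cursor's job was to turn blocks like that into reusable, typed components.
export function StatCard({ label, value }: StatCardProps) {
  return (
    <div className="stat-card">
      <span className="stat-card__label">{label}</span>
      <strong className="stat-card__value">{value}</strong>
    </div>
  );
}
```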
For the backend, the replacement for Figma Maker was ChatGPT. Using my project in a separate backend chat, I defined all the requirements, features, and endpoints needed for the type of app I wanted. Then I passed all that to Cursor, and in another couple of days I had my backend running locally. And the DB integration? Easy. I’m familiar with Liquibase, but this time I asked ChatGPT for a simpler, more integrated solution for Node.js, and I discovered Prisma. Again, by passing the prompt to Cursor, it created a clear database structure mapped to my backend endpoints.
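To give a feel for what Prisma looks like in a Node.js backend, here is a minimal sketch of a Prisma-backed Express endpoint. The Job model, fields, and routes are made-up examples for illustration, not the actual schema of my app.

```typescript
// Illustrative sketch only: assumes a schema.prisma with a model roughly like
//   model Job { id Int @id @default(autoincrement()) title String status String createdAt DateTime @default(now()) }
import express from "express";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const app = express();
app.use(express.json());

// Create a job record; Prisma maps this call to the underlying table.
app.post("/api/jobs", async (req, res) => {
  const job = await prisma.job.create({
    data: { title: req.body.title, status: "PENDING" },
  });
  res.status(201).json(job);
});

// List the most recent jobs.
app.get("/api/jobs", async (_req, res) => {
  const jobs = await prisma.job.findMany({ orderBy: { createdAt: "desc" } });
  res.json(jobs);
});

app.listen(3000, () => console.log("API listening on :3000"));
```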
Step 6: Building
Previously I was running everything locally, but now I needed to dockerise the app to make it portable. So I used Cursor again to create all the Dockerfiles, plus a Docker Compose setup to deploy my Docker images locally, so I could test the dockerisation before deploying to a cloud environment. This step was quick: Cursor helped not just with creating the build processes, but also generated shell scripts and YAML pipelines so I could include all the build steps in my basic pipeline.
Step 7: Testing
Once again the champion here was Cursor. I used it to create all my test scenarios, plus the frontend testing with Cypress. One unexpected part came when I was running everything with Docker Compose and hit small issues with the integration between frontend, backend, and database. Cursor was really helpful in troubleshooting, sometimes even too helpful, because it checked every possible path and overthought the issue. To improve the troubleshooting, I used a combination of ChatGPT and Cursor. By giving the context to ChatGPT, it found many problems that Cursor didn’t understand, and the best part was that, with that guidance from GPT, I could tell Cursor exactly what to test. Once I did that, it worked really well. I also kept those test scenarios as scripts or as part of my test code for future runs.
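As an example of the kind of frontend test this produced, here is a minimal Cypress spec in that style. The route, selector text, and API path are placeholders, not the real ones from my app.

```typescript
// Hypothetical Cypress spec: the route, selector text, and API path are
// placeholders, not the real ones from my app.
describe("jobs dashboard", () => {
  it("shows jobs returned by the backend", () => {
    // Stub the backend so the UI test does not depend on a running API.
    cy.intercept("GET", "/api/jobs", {
      statusCode: 200,
      body: [{ id: 1, title: "Process images", status: "PENDING" }],
    }).as("getJobs");

    cy.visit("/jobs");
    cy.wait("@getJobs");

    // Assert the stubbed job is rendered on the page.
    cy.contains("Process images").should("be.visible");
  });
});
```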
Release & Deploy
As I mentioned previously, I used AWS as the platform for my application, but here the star was Terraform.
Step 8: Release & Deploy
For release control, I used Docker Hub to store the images. The deployment was automated using Terraform Cloud for the foundational architecture and self-managed Terraform (with an S3 state backend) for the application setup. The full setup was built using Cursor and ChatGPT together. First, I asked ChatGPT to help me write a clear architecture prompt describing what I needed, then passed that to Cursor so it could generate all the Terraform scripts. Cursor even helped update my test cases to include infrastructure testing, which was unexpected but very helpful.
One key AWS service here is SQS — basically a message queue. My backend sends jobs there, and from that queue, they get picked up and processed by Lambda, then returned to the backend. This works really well because it’s cost-effective. Imagine you have an app where each user has a picture gallery and you need to process all those images. You could scale up the backend or create separate microservices, but both solutions get expensive and hard to manage, especially under peak loads. With SQS + Lambda, each job is processed in parallel without needing permanent infrastructure.
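Here is a rough sketch of that pattern in Node.js with the AWS SDK v3: the backend drops a job on the queue, and a Lambda picks it up from the SQS event. The region, queue URL, payload shape, and processing logic are placeholders, not my actual code.

```typescript
// Sketch of the queue pattern described above, using AWS SDK v3.
// The region, queue URL, payload shape, and processing logic are placeholders.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import type { SQSEvent } from "aws-lambda"; // types from @types/aws-lambda

const sqs = new SQSClient({ region: "eu-west-2" });

// Backend side: enqueue a job instead of processing it in-process.
export async function enqueueImageJob(imageKey: string): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.JOBS_QUEUE_URL, // injected via env/Terraform output
      MessageBody: JSON.stringify({ imageKey }),
    })
  );
}

// Lambda side: triggered by SQS; each message is handled independently.
export async function handler(event: SQSEvent): Promise<void> {
  for (const record of event.Records) {
    const { imageKey } = JSON.parse(record.body);
    // Placeholder for the real work (e.g. resizing an image stored in S3).
    console.log(`processing ${imageKey}`);
  }
}
```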
And here’s the best part: AWS gives you 1 million free Lambda requests per month, plus 400,000 GB-seconds of free compute time. After the free tier, every extra million requests costs just $0.20, plus a small per-GB-second duration charge. That’s insanely cheap compute power. On top of that, I discovered a bunch of AI services in AWS (like Bedrock) I didn’t know about. You can integrate them directly, like I did to include a chat system in dev.aipipelines.co.uk, or even for image generation. Just pick the model you want, connect it, and it works.
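For the curious, this is roughly what calling a Bedrock chat model from a Node.js backend can look like using the Converse API. The model ID, region, and prompt wiring are assumptions for illustration; my actual integration may differ in the details.

```typescript
// Hedged sketch of a Bedrock chat call from a Node.js backend.
// The model ID, region, and prompt wiring are assumptions, not my exact setup.
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

const bedrock = new BedrockRuntimeClient({ region: "us-east-1" });

export async function chat(userMessage: string): Promise<string> {
  const response = await bedrock.send(
    new ConverseCommand({
      modelId: "anthropic.claude-3-haiku-20240307-v1:0", // any Bedrock chat model
      messages: [{ role: "user", content: [{ text: userMessage }] }],
    })
  );
  // Return the first text block of the model's reply, or an empty string.
  return response.output?.message?.content?.[0]?.text ?? "";
}
```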
Conclusion
This first part — from idea to deployment — was not about theory. It was about testing the DevOps tools of 2025 by actually using them to build something real. Every tool I mentioned, from Cursor to Rovo, from Figma Maker to AWS Lambda, played a role in building my MVP. Some things worked well, others needed workarounds, and a few were surprisingly powerful when combined. But that’s the point — DevOps today isn’t just about which tools you use, it’s about how you connect them, automate them, and adapt them to build faster (without losing control).
Next week, I’ll continue with the second part: Operations & Monitoring in Practice — where I’ll show what I used to run, observe, and maintain the system once it’s live.