Following Sam Julien’s talk at LambdaTest’s TestMu conference on the shift from SDLC to Agent Development Lifecycle (ADLC), I want to continue the conversation about how agentic AI changes how we build software.

Traditional software development assumes you can predict behavior upfront: gather static requirements, define a process, ship, and test. But agents operate toward goals, and their behavior can’t always be fully predicted ahead of time. That’s why ADLC matters. It reframes how we build:

1️⃣ On top of static requirements → focus on outcomes, the business goals the agent must achieve
2️⃣ On top of process design → define behaviors, how the agent should think, act, and adapt
3️⃣ On top of one-off custom builds → scale with reusable patterns and components
4️⃣ On top of QA testing → run agent evaluations, with behavioral audit trails to inspect actions and outcomes (a minimal sketch of what such an evaluation can look like follows below)
5️⃣ On top of deployment → design for rapid iteration, knowing prompts, logic, and data will need constant refinement
6️⃣ And beyond maintenance → add supervision, ensuring quality, safety, and compliance at scale

As the WRITER team put it, “We’re not just versioning code anymore. We’re versioning intelligence, behavior, and decision-making.”

I think this idea will continue to evolve as AI adoption increases, but it raises an important question for all of us: how do we collectively rethink development to make agentic systems reliable, safe, and scalable?

Read more in our blog: https://lnkd.in/g5ciYpwA or catch Sam’s upcoming talk on ADLC at The AI Conference!
How ADLC Changes Software Development with Agentic AI
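To make point 4️⃣ concrete, here is a minimal sketch of an outcome-focused agent evaluation with a behavioral audit trail. Everything in it (the stub agent, scenario fields, keyword check) is illustrative rather than taken from the talk or the blog:

```python
# A minimal, illustrative evaluation harness: run an agent against goal-level
# scenarios, check outcomes rather than exact outputs, and keep an audit trail.
# The agent here is a stub; in practice it would wrap an LLM or tool-using agent.
import json
from datetime import datetime, timezone

def toy_agent(task: str) -> dict:
    """Stand-in agent: returns an action and a result for a given task."""
    return {"action": "lookup_refund_policy", "result": "refund issued within 5 days"}

SCENARIOS = [
    {"task": "Customer asks for a refund status", "expect_keyword": "refund"},
    {"task": "Customer asks for a delivery estimate", "expect_keyword": "delivery"},
]

audit_trail = []
for case in SCENARIOS:
    outcome = toy_agent(case["task"])
    passed = case["expect_keyword"] in outcome["result"].lower()
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task": case["task"],
        "action": outcome["action"],
        "result": outcome["result"],
        "passed": passed,
    })

print(json.dumps(audit_trail, indent=2))
print(f"pass rate: {sum(e['passed'] for e in audit_trail)}/{len(audit_trail)}")
```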
Most development teams are experimenting with AI tools, but few have cracked the code on systematic integration across their entire software delivery lifecycle. The result? Fragmented adoption that creates more friction than flow.

Join Aditi Agarwal on September 18 for a deep dive into building an AI-augmented SDLC that actually works. You'll discover practical patterns for maximizing developer effectiveness, learn where AI creates genuine value versus hidden friction, and walk away with actionable strategies for sustainable engineering advantage. This isn't theory - it's hands-on guidance from real software delivery experiences.

Limited seats! Sign up now: https://ter.li/i0mtu5
What happens when developers start using tools like Cursor more frequently? At DX, we can correlate AI usage with any SDLC metric. You can instantly see if heavy Cursor users are able to spend more time on feature development, or if increased usage correlates with longer cycle times and harder-to-understand code.
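As an illustration of the kind of correlation being described (not DX's actual product or data model), here is a rough sketch with hypothetical per-developer columns for AI usage and delivery metrics:

```python
# Hypothetical example: correlate per-developer AI tool usage with SDLC metrics.
# Column names and values are made up for illustration; a real analysis would
# pull from your telemetry.
import pandas as pd

df = pd.DataFrame({
    "dev":                 ["a", "b", "c", "d", "e"],
    "cursor_hours_per_wk": [1, 3, 6, 8, 12],
    "feature_time_pct":    [40, 45, 55, 60, 58],
    "cycle_time_days":     [5.2, 4.8, 4.1, 4.5, 5.0],
})

# Pearson correlation between usage and each outcome metric
print(df[["cursor_hours_per_wk", "feature_time_pct", "cycle_time_days"]].corr())
```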
Getting specifications right has always been critical for building good software. The research equivalent is asking the right questions. Too often, though, engineers and companies, eager to ship fast under the banner of “rapid iteration”, skip over requirements and specifications. Great for demos, not so great for business.

Enter #SpecKit, an open-source toolkit from GitHub. Think of it as specification-driven development (SDD), similar to test-driven development. It’s easy to set up, with a few commands that guide you through the deliberate process: clarify specs, create a plan, and then implement.

Why do I think this matters? Historically, developers got rapid feedback on syntax (will it compile?) and semantics (do the tests pass?). Test-driven development bridged specs and implementation, where we literally wrote tests before code. Automation and now GPT-based tooling have accelerated this. But there was always a gap: natural-language specs vs. machine-readable code. We never encountered the equivalent of a “your specs didn’t compile” error, nor did we have strong guarantees that the specs accurately describe the code and that the code implements the specs. Aligning specs has remained a slow, high-friction, collaborative process (shout out to Oren Toledano and the folks at Swimm for their innovative work advancing tooling on that front).

With tools like SpecKit, we’re moving toward a future where specifications become the dynamic artifact of record. Per the blog post from GitHub, “AI makes specifications executable.” Code is becoming the commodity piece; intent, captured in specs, is the source of truth.

Worth remembering that AI can write code well when the specs are good, but it can’t read your mind. You wouldn’t tell a teammate “just build something transformative” and expect success. The same holds for AI. So whether you’re prototyping or scaling infrastructure, start with the specs. As agents and automation multiply our leverage, the cycles that matter most are spec-first + verify. If something breaks, check your specs first.
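As a loose illustration of what a “your specs didn’t compile” error could feel like (this is not SpecKit’s actual format or API, just a sketch of the idea), a spec can be captured as structured data and validated before any implementation work starts:

```python
# Illustration only: a requirement captured as structured data that can "fail to
# compile" if it is incomplete or contradictory. Not SpecKit's format or API.
from dataclasses import dataclass, field

@dataclass
class Spec:
    feature: str
    acceptance_criteria: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Return a list of problems, the way a compiler returns errors."""
        problems = []
        if not self.acceptance_criteria:
            problems.append("no acceptance criteria: 'build something transformative' won't do")
        overlap = set(self.acceptance_criteria) & set(self.out_of_scope)
        if overlap:
            problems.append(f"criteria contradict out-of-scope items: {overlap}")
        return problems

spec = Spec(feature="password reset", acceptance_criteria=[], out_of_scope=["SSO"])
for problem in spec.validate():
    print("SPEC ERROR:", problem)
```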
Day71 – MLflow Model Versioning

Today’s session focused on model versioning with MLflow, an essential part of MLOps that ensures safe, reproducible, and trackable model management. We explored two approaches to versioning: manual (UI-based) and automatic (code-based), and saw how MLflow’s Model Registry helps in managing multiple versions efficiently.

Key Concepts

Why Model Versioning?
• Ensures reproducibility & lineage (track which run produced which model)
• Enables safe rollbacks if a new version fails
• Supports approval workflows (None → Staging → Production)
• Acts as a central catalog for collaboration
• Prepares models for CI/CD automation

Two Approaches in MLflow:
1. Manual Versioning (UI-based)
• Log the model in a run
• Use MLflow UI → Register model → Becomes Version 1, 2, …
2. Automatic Versioning (Code-based)
• Pass registered_model_name while logging the model
• MLflow auto-creates a new version for each run
• Optionally promote models to Staging/Production programmatically

Practical Steps Covered

Manual Versioning
• Installed dependencies (mlflow, scikit-learn, pandas)
• Started the MLflow Tracking UI (mlflow ui)
• Prepared a dataset with make_classification
• Trained a Logistic Regression model
• Logged parameters, metrics & model in MLflow
• Registered manually via UI: Experiments → Run → Artifacts → Register Model

Automatic Versioning
• Used registered_model_name="MyAutoRegisteredModel" when logging
• MLflow auto-registered new versions (v1, v2, v3, …)
• Checked versions in the MLflow UI under Model Registry
• Used MlflowClient() to programmatically:
1. List versions
2. Promote the latest version to Staging or Production
(A condensed code sketch of this flow follows below.)

Summary
• Manual Versioning: UI-based, requires clicks to register versions
• Automatic Versioning: Code-based, each run creates a new version automatically
• MLflow Model Registry provides version history, staging, and production workflows
• Critical for collaboration, governance, and CI/CD in MLOps

📓 Notebook & Code: https://lnkd.in/eH-wa8RN
📂 GitHub Repo: https://lnkd.in/e6EWcv-D

#Day71 #MLOps #MLflow #ModelVersioning #MachineLearning #ExperimentTracking #ModelDeployment #CICD #DataEngineering #AI #GenAI #100DaysOfCode KODI PRAKASH SENAPATI #Thanks
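For reference, a condensed sketch of the automatic-versioning flow described above (not the linked notebook itself; the tracking URI and experiment name are assumptions, while the registered model name matches the one in the post):

```python
# Condensed sketch of automatic versioning (assumes a local tracking server
# started with `mlflow ui`, reachable at http://127.0.0.1:5000).
import mlflow
from mlflow.tracking import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("model-versioning-demo")  # assumed experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run():
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # registered_model_name makes MLflow create Version 1, 2, 3, ... per run
    mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="MyAutoRegisteredModel"
    )

# Promote the latest version to Staging programmatically
client = MlflowClient()
latest = max(
    int(v.version)
    for v in client.search_model_versions("name='MyAutoRegisteredModel'")
)
client.transition_model_version_stage(
    name="MyAutoRegisteredModel", version=str(latest), stage="Staging"
)
```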
I just put together a project packaging tool. I handled the problem-solving and used AI to speed up some of the coding. The result is a simple solution that lets me package projects with a single click, whether for testing or shipping a build.

Possible future additions:
- Add a default engine version
- Add an engine root directory for dynamic engine selection
- Add the engine used to the log info
- Load the default build config from a .config/.txt file so you can set custom default build settings

Feel free to check it out: https://lnkd.in/dwdjtuf7
🚀 MCPs are blowing my mind!

The more I dive into the Model Context Protocol (MCP), the more I realize how transformative it can be for the way AI interacts with tools. The learning curve has been steep, but every step is eye-opening.

At its core, MCP is about creating a common language so that AI models can seamlessly interact with developer tools, automation frameworks, and collaboration platforms. Here’s how it breaks down:

⚡ Host → Where your AI model is running. Examples: VS Code, Claude Desktop, Cursor. It’s the environment where you type or talk.
⚡ Client → The bridge in the middle. It speaks both languages: the host’s natural-language world and the server’s structured protocol world. The client converts host requests into MCP JSON (a standardized request/response schema) and routes them correctly.
⚡ Server → The tools that expose capabilities in a structured way. Examples: Playwright MCP, Selenium MCP, Jira MCP, Confluence MCP, GitHub MCP, etc. Each server defines what it can do (through capabilities/endpoints) and responds in a predictable format.

🔄 How it flows:
1. You type something in your host (e.g., “Run a Playwright test on the login flow”).
2. The client translates this into a structured MCP JSON request.
3. The server (Playwright MCP) executes the request and returns results in JSON.
4. The client translates the results back into something your host (and you) can understand.
(A rough example of these structured messages follows below.)

And not to forget, multiple MCP servers can be orchestrated together by your AI agent. For example, an AI agent could use Playwright MCP to run tests, log the results in Jira MCP, and then document them in Confluence MCP.

And I am accidentally trying to build a client with a host! 🤯 Bruh..!!

#AI #MCP #Automation #Playwright #Selenium #Innovation #DeveloperTools
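A rough sketch of what the structured request/response in steps 2–3 can look like. The framing follows MCP’s JSON-RPC style, but the tool name and arguments here are hypothetical; real servers advertise their own tools:

```python
# Illustration of the structured messages a client exchanges with a server.
# The tool name and arguments are hypothetical placeholders.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_test",                    # hypothetical tool on a test-runner server
        "arguments": {"suite": "login-flow"},  # hypothetical arguments
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "12 passed, 0 failed"}],
    },
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```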
I’ve been involved in the MCP world for quite some time now. In the post above, Manish lays out the concepts simply and to the point. One question I get asked often: is our data safe with MCP?

MCP supports two standard transport mechanisms for client-server communication: stdio (local) and HTTP (remote). With a local transport, the server (Playwright MCP, SQL Server MCP, etc.) is installed on your machine and the interactions stay on your machine. With a remote transport, the client communicates with an external server, much like calling an API and receiving a response; it can be secure, but you need to understand the authentication behind it.

More to dig into and more to add.
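To illustrate the difference, here is a sketch of the two transport shapes expressed as the kind of client-side configuration many MCP clients use. The exact keys, package names, and URLs vary by client, so treat these as placeholders:

```python
# Illustrative only: local (stdio) vs. remote (HTTP) MCP server configuration.
local_stdio_server = {
    "transport": "stdio",
    # The client launches the server as a local subprocess; traffic never
    # leaves your machine.
    "command": "npx",
    "args": ["@playwright/mcp@latest"],  # placeholder package name
}

remote_http_server = {
    "transport": "http",
    # The client calls an external endpoint, much like any API: the transport
    # can be encrypted, but you still need to understand the auth behind it.
    "url": "https://mcp.example.com/endpoint",       # placeholder URL
    "headers": {"Authorization": "Bearer <token>"},  # placeholder credential
}

print(local_stdio_server)
print(remote_http_server)
```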
What makes generative AI-driven software systems so tricky to test? Get the rundown in our blog. Discover how to manage and scale the complexities >> #genai #automatedtesting
Well explained, Ashley Weaver! Loved hosting you at the #TestMuConf.