What happens when developers start using tools like Cursor more frequently? At DX, we can correlate AI usage with any SDLC metric. You can instantly see whether heavy Cursor users spend more time on feature development, or whether increased usage correlates with longer cycle times and harder-to-understand code.
How Cursor usage affects SDLC metrics at DX
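As a rough sketch of the kind of correlation analysis described above (not DX's actual implementation, and using hypothetical column names), a pandas version might look like this:

```python
import pandas as pd

# Hypothetical per-developer export: weekly Cursor usage next to SDLC outcome metrics.
# Column names are illustrative, not DX's actual schema.
df = pd.read_csv("developer_metrics.csv")  # cursor_hours, feature_time_pct, cycle_time_days

# Pearson correlation of AI usage against each outcome metric.
print(df[["cursor_hours", "feature_time_pct", "cycle_time_days"]].corr()["cursor_hours"])

# Compare heavy vs. light users by splitting on median usage.
is_heavy = df["cursor_hours"] >= df["cursor_hours"].median()
print(df.groupby(is_heavy)[["feature_time_pct", "cycle_time_days"]].mean())
```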
More Relevant Posts
-
Most development teams are experimenting with AI tools, but few have cracked the code on systematic integration across their entire software delivery lifecycle. The result? Fragmented adoption that creates more friction than flow. Join Aditi Agarwal on September 18 for a deep dive into building an AI-augmented SDLC that actually works. You'll discover practical patterns for maximizing developer effectiveness, learn where AI creates genuine value versus hidden friction, and walk away with actionable strategies for sustainable engineering advantage. This isn't theory - it's hands-on guidance drawn from real software delivery experience. Limited seats! Sign up now: https://ter.li/i0mtu5
-
From zero to my first Engineering Productivity Metrics Analyser with Claude. What started as a simple question about engineering productivity metrics analysis turned into a fully automated solution in under two hours.
My Claude Code experience:
- Started with a plain-English description of what I wanted
- Claude built a complete analyser, RAG status classification, and automated reporting
- When I mentioned JavaScript would be easier to run: instant conversion, no hassle
- When I asked for parameterization for future data sources: boom, CLI arguments, auto-detection, batch processing
What impressed me most:
- Zero setup friction: I went from idea to working solution almost instantly, something that would otherwise have taken months of development time
- Iterative improvement felt natural, like pair programming with an experienced engineer
- Well-designed code, help documentation, and built-in flexibility
The result: an analyser that automatically:
- Classifies RAG levels
- Detects trends and patterns
- Handles multiple data sources seamlessly
💡 This is what the future of development looks like: focusing on WHAT you want to build rather than HOW to build it. #ClaudeCode #AI #VibeCoding #EngineeringProductivity #DeveloperExperience #Innovation
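As a rough illustration of the threshold-style RAG classification an analyser like this might perform (shown in Python rather than the JavaScript version mentioned in the post; the thresholds and metric names are assumptions, not the generated code):

```python
# Minimal sketch of threshold-based RAG classification for a productivity metric.
# Thresholds, metric names, and the "higher is better" convention are illustrative.
def classify_rag(value: float, green_at: float = 0.8, amber_at: float = 0.6) -> str:
    if value >= green_at:
        return "Green"
    if value >= amber_at:
        return "Amber"
    return "Red"

metrics = {"deployment_frequency_score": 0.85, "review_turnaround_score": 0.55}
report = {name: classify_rag(score) for name, score in metrics.items()}
print(report)  # {'deployment_frequency_score': 'Green', 'review_turnaround_score': 'Red'}
```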
-
The most successful developers I work with are moving beyond using AI as a tool to thinking AI-first in their development process: Traditional Workflow: Plan → Code → Test → Deploy AI-Native Workflow: Context → Question → Visualize → Generate → Validate → Iterate This shift represents a fundamental change in how we approach software development. My comprehensive Claude Code guide shows you 12 workflow patterns that turn complex projects into simple checklists. https://lnkd.in/eF4_4_Tt
-
Most AI in dev today = code assistants. But coding is only 30-40% of the SDLC. What about the other 60-70%? Requirements and benchmarks drifting, compliance gaps, mounting technical debt, unpredictable delivery. That’s where AI has to step in, governing how software is built, shipped, and sustained.
-
Day 71 – MLflow Model Versioning
Today’s session focused on model versioning with MLflow, an essential part of MLOps that ensures safe, reproducible, and trackable model management. We explored two approaches to versioning, manual (UI-based) and automatic (code-based), and saw how MLflow’s Model Registry helps in managing multiple versions efficiently.
Key Concepts
Why Model Versioning?
• Ensures reproducibility & lineage (track which run produced which model)
• Enables safe rollbacks if a new version fails
• Supports approval workflows (None → Staging → Production)
• Acts as a central catalog for collaboration
• Prepares models for CI/CD automation
Two Approaches in MLflow:
1. Manual Versioning (UI-based)
• Log the model in a run
• Use the MLflow UI → Register model → Becomes Version 1, 2, …
2. Automatic Versioning (Code-based)
• Pass registered_model_name while logging the model
• MLflow auto-creates a new version for each run
• Optionally promote models to Staging/Production programmatically
Practical Steps Covered
Manual Versioning
• Installed dependencies (mlflow, scikit-learn, pandas)
• Started the MLflow Tracking UI (mlflow ui)
• Prepared a dataset with make_classification
• Trained a Logistic Regression model
• Logged parameters, metrics & model in MLflow
• Registered manually via the UI: Experiments → Run → Artifacts → Register Model
Automatic Versioning
• Used registered_model_name="MyAutoRegisteredModel" when logging
• MLflow auto-registered new versions (v1, v2, v3, …)
• Checked versions in the MLflow UI under Model Registry
• Used MlflowClient() to programmatically:
1. List versions
2. Promote the latest version to Staging or Production
Summary
• Manual versioning: UI-based, requires clicks to register versions
• Automatic versioning: code-based, each run creates a new version automatically
• The MLflow Model Registry provides version history and staging/production workflows
• Critical for collaboration, governance, and CI/CD in MLOps
📓 Notebook & Code: https://lnkd.in/eH-wa8RN
📂 GitHub Repo: https://lnkd.in/e6EWcv-D
#Day71 #MLOps #MLflow #ModelVersioning #MachineLearning #ExperimentTracking #ModelDeployment #CICD #DataEngineering #AI #GenAI #100DaysOfCode KODI PRAKASH SENAPATI #Thanks
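A condensed sketch of the automatic-versioning flow described above; the hyperparameters are placeholders, and the stage-based promotion shown is the classic Model Registry API, which newer MLflow releases are replacing with version aliases:

```python
import mlflow
import mlflow.sklearn
from mlflow.tracking import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run():
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # registered_model_name triggers automatic versioning: each run adds v1, v2, ...
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="MyAutoRegisteredModel")

client = MlflowClient()
versions = client.search_model_versions("name='MyAutoRegisteredModel'")
latest = max(versions, key=lambda v: int(v.version))
# Classic stage-based promotion (newer MLflow favors model version aliases instead).
client.transition_model_version_stage(name="MyAutoRegisteredModel",
                                      version=latest.version, stage="Staging")
```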
-
💡 What if your AI assistant could access your test cases without leaving your IDE? Developers lose focus when they constantly switch between coding and test management tools. The QA Sphere MCP Server solves this by connecting your AI-powered IDE directly to your QA Sphere test libraries.
✔ Look up test cases by feature or tag
✔ Spot coverage gaps in real time
✔ Generate bug reports with full context
✔ Get AI-driven automation priorities
Stop breaking your workflow: bring QA intelligence right into your development environment.
🎥 Watch how it works: https://lnkd.in/edQChmQC
-
Every developer has faced the late-night debugging dread, spending hours unraveling spaghetti logic that poor planning could’ve prevented. In our latest blog, we share a practical, brain-friendly framework to save you from those sleepless, panic-fueled sessions. Here’s a snapshot of the principle behind it:
➔ 15 minutes of smart planning = 5 hours saved from debugging
➔ Research shows our brains process visuals 60,000x faster than text. That’s why mind maps, flow diagrams, and quick sketches shift you from reactive coding to proactive designing.
➔ Visual reasoning = senior-level problem solving. Visual diagrams help you spot system-wide dependencies, corner cases, and user pain points before you write a single line of code.
➔ The 15-Minute Framework, designed to align with how developers think:
➔ Sketch the feature flow or states (loading, success, error)
➔ Ask “What could go wrong?”
➔ Plan minimal safeguards or recovery flows
➔ Code with clarity and fewer surprises
Want to see this planning framework in action? Read the full blog here: https://lnkd.in/gZsr8BMk
At Wow Labz, effective planning isn’t just a nice-to-have. It’s the foundation for building resilient AI agents, robust feature flows, and scalable digital products. If you're ready to build better, smarter systems, let’s connect: https://lnkd.in/gY37rtBW
We’re ready to help you plan, build, and ship AI-powered workflows today.
#DeveloperProductivity #PlanningFramework #MindMapping #AIWorkflow #WowLabz
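To make the "sketch the states, then plan minimal safeguards" step concrete, here is a minimal generic sketch; the state names, retry policy, and function names are illustrative and not taken from the blog:

```python
from enum import Enum, auto

class FetchState(Enum):
    LOADING = auto()
    SUCCESS = auto()
    ERROR = auto()

def fetch_with_recovery(fetch, retries: int = 2):
    """Walk the loading -> success/error states planned up front, with a simple retry safeguard."""
    state = FetchState.LOADING
    last_error = None
    for attempt in range(retries + 1):
        try:
            data = fetch()
            state = FetchState.SUCCESS
            return state, data
        except Exception as exc:  # the "what could go wrong?" branch
            state = FetchState.ERROR
            last_error = exc
    return state, last_error
```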
-
From Code to Programs, From Typing to Specs. Traditional dev was about grinding through code. AI-first dev flips the script: agents write, and will increasingly write, most of the program, while humans own the specs, rules, and architecture. In 2024–2025, specs aren’t optional docs anymore; they’re the constitution for AI-built systems. Here is my practical guide, based on experiments that have worked for me.
-
Over the past few weeks, I’ve had the opportunity to dive deep into automation challenges while working on the ICF tool. One of the biggest learnings for me has been around using ancestor locators effectively, especially in situations where multiple similar elements exist on a page. Finding the right way to handle them was not just a technical task but a puzzle that required patience and creativity.
Another interesting challenge was handling dropdowns:
1) Extracting values,
2) Storing them in lists, and
3) Reusing them dynamically.
This felt like moving from basic automation to really playing with the data and understanding how flexible UI automation can become.
But what stood out most was the sheer scale: writing 7,000+ lines of automation code for an end-to-end flow. It wasn’t just about typing code; it was about:
1) Managing complexity,
2) Dealing with repetitive locators,
3) Troubleshooting when multiple identical elements confused the scripts,
4) And still keeping everything maintainable and reusable.
Along the way, I also leaned on tools like GitHub Copilot and Perplexity AI, especially when dealing with tricky JavaScript-heavy UI components that Robot Framework alone couldn’t handle. Having that extra support sped up experimentation and helped me explore solutions I might have missed.
Yes, it was a hassle at times, but every blocker turned into a learning opportunity. And looking back, the journey from struggling with locators to building reliable, reusable automation feels like a huge step forward.
Key takeaway: Automation is less about writing code and more about solving problems with logic, persistence, and adaptability.
#AutomationTesting #RobotFramework #LearningByDoing #Copilot #Perplexity
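For readers unfamiliar with the two techniques mentioned, here is a small sketch in Python with Selenium (the post's project used Robot Framework); the page URL, labels, and locators are made up for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

driver = webdriver.Chrome()
driver.get("https://example.com/form")  # placeholder URL

# Ancestor locator: anchor on a unique label, walk up to its container with ancestor::,
# then search back down -- useful when several similar widgets exist on the same page.
country_select = driver.find_element(
    By.XPATH,
    "//label[normalize-space()='Country']/ancestor::div[contains(@class,'form-group')]//select",
)

# Dropdown handling: extract the values, store them in a list, reuse them dynamically.
dropdown = Select(country_select)
options = [opt.text for opt in dropdown.options]
for value in options:
    dropdown.select_by_visible_text(value)  # e.g. drive a data-dependent flow per option

driver.quit()
```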
-
Following Sam Julien’s talk at LambdaTest’s TestMu conference on the shift from SDLC to Agent Development Lifecycle (ADLC), I want to continue the conversation about how agentic AI changes how we build software.
Traditional software development assumes you can predict behavior upfront: gather static requirements, define a process, ship, and test. But agents operate toward goals, and their behavior can’t always be fully predicted ahead of time. That’s why ADLC matters. It reframes how we build:
1️⃣ On top of static requirements → focus on outcomes, the business goals the agent must achieve
2️⃣ On top of process design → define behaviors, how the agent should think, act, and adapt
3️⃣ On top of one-off custom builds → scale with reusable patterns and components
4️⃣ On top of QA testing → run agent evaluations, with behavioral audit trails to inspect actions and outcomes
5️⃣ On top of deployment → design for rapid iteration, knowing prompts, logic, and data will need constant refinement
6️⃣ And beyond maintenance → add supervision, ensuring quality, safety, and compliance at scale
As the WRITER team put it, “We’re not just versioning code anymore. We’re versioning intelligence, behavior, and decision-making.”
I think this idea will continue to evolve as AI adoption increases, but it raises an important question for all of us: how do we collectively rethink development to make agentic systems reliable, safe, and scalable?
Read more in our blog: https://lnkd.in/g5ciYpwA or catch Sam’s upcoming talk on ADLC at The AI Conference!
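As a very rough sketch of what "agent evaluations with behavioral audit trails" can look like in code (the agent interface, scenario format, and checks here are all hypothetical, not taken from the talk or blog):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class EvalResult:
    scenario: str
    passed: bool
    audit_trail: list = field(default_factory=list)  # every action the agent recorded

def evaluate(agent: Callable[[str, list], str],
             scenarios: dict[str, Callable[[str], bool]]) -> list[EvalResult]:
    """Run the agent against goal-oriented scenarios and judge outcomes, not exact steps."""
    results = []
    for goal, outcome_ok in scenarios.items():
        trail: list = []
        output = agent(goal, trail)  # the agent is expected to append its actions to `trail`
        results.append(EvalResult(goal, outcome_ok(output), trail))
    return results

# Hypothetical usage:
# results = evaluate(my_agent, {"refund order": lambda out: "refund issued" in out.lower()})
```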
-
Greyson Junggren if you're showing usage, does that mean the user is active or the agent is active? I'm trying to understand whether the user needs to be actively engaging with Cursor Agent.