How can you measure AI's impact on your organization?
To understand AI's impact on your organization -- and to improve it -- you need two things:
A solid understanding of your overall productivity and performance (DX Core 4)
Specific measurements that capture the precise impact of AI tools, so you can adjust your approach when necessary (brand new: the AI Measurement Framework)
Despite AI's impact on how developers work, the core objectives remain the same: delivering working software that solves real problems. Robust, multi-dimensional approaches to measuring productivity and performance (as highlighted in the SPACE framework, and made operationally practical with the DX Core 4) are still the right way to assess performance. Arguably, these measures matter more than ever, specifically to help us avoid trading short-term gains in speed for longer-term losses in quality and maintainability.
To make better decisions about the impact of AI tools, you also need specific measurements that track the performance of those tools across three dimensions (see the sketch after this list):
Utilization: are developers and teams adopting these tools into their workflows?
Impact: are these tools causing meaningful positive change to how fast we can ship?
Cost: do the financials make sense?
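To make the three dimensions concrete, here is a minimal sketch of how a team might aggregate them from its own telemetry and survey data. The field names and formulas (active_ai_users, hours_saved_per_week, and so on) are illustrative assumptions for this sketch, not the official metric definitions from the DX AI Measurement Framework.

```python
from dataclasses import dataclass

@dataclass
class AIToolSnapshot:
    """Hypothetical monthly snapshot for one AI coding tool."""
    active_ai_users: int          # developers using the tool at least weekly
    total_developers: int         # developers who could be using it
    hours_saved_per_week: float   # self-reported or estimated time savings, org-wide
    license_cost_per_month: float # total spend on the tool

def utilization(s: AIToolSnapshot) -> float:
    """Utilization: share of developers who have adopted the tool into their workflow."""
    return s.active_ai_users / s.total_developers

def impact_hours_per_dev(s: AIToolSnapshot) -> float:
    """Impact: average weekly hours saved per active user, a rough proxy for speed gains."""
    return s.hours_saved_per_week / max(s.active_ai_users, 1)

def monthly_cost_per_active_user(s: AIToolSnapshot) -> float:
    """Cost: spend divided by actual usage, so idle licenses show up as a higher unit cost."""
    return s.license_cost_per_month / max(s.active_ai_users, 1)

# Example with made-up numbers
snapshot = AIToolSnapshot(active_ai_users=320, total_developers=500,
                          hours_saved_per_week=640.0, license_cost_per_month=9500.0)
print(f"Utilization: {utilization(snapshot):.0%}")
print(f"Impact: {impact_hours_per_dev(snapshot):.1f} hrs/week per active user")
print(f"Cost: ${monthly_cost_per_active_user(snapshot):.2f} per active user/month")
```

Tracking all three together is the point: high utilization with flat impact, or strong impact at an unsustainable cost per active user, each calls for a different adjustment.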
This week, DX announced the AI Measurement Framework, a research-backed framework to provide clear guidance on what to measure.
The DX AI Measurement Framework includes AI-specific metrics to enable organizations to track AI adoption, measure impact, and make smarter investments—all while continuing to roll out and experiment with AI tools at a rapid pace.
When combined with the DX Core 4, which measures overall engineering productivity, leaders gain deep insight into how AI is providing value to their developers, and what impact AI is having on organizational performance.
Read the full whitepaper here: https://getdx.com/research/measuring-ai-code-assistants-and-agents/
This is such an important question, Laura, especially because measuring AI's impact isn't just about developer velocity or immediate productivity gains, but also about the invisible costs and risks we introduce. When we integrate AI deeply into engineering workflows, we're often looking at metrics like PR throughput, cycle time, or cognitive load reduction. But at the same time, AI tooling introduces new dimensions: the attack surface grows (e.g., model supply chain vulnerabilities, prompt injection, shadow dependencies), and so does the risk of unintentional data leakage or flawed code patterns being amplified at scale. The challenge is that traditional metrics rarely capture this evolving risk profile. We need a more holistic approach that tracks not only acceleration but also resilience:
1/ How does AI impact overall security posture over time?
2/ Are we inadvertently creating new classes of vulnerabilities faster than we're resolving technical debt?
3/ How well can our tooling detect and remediate AI-driven or AI-amplified vulnerabilities before they reach production?
Curious to hear: have you seen organizations start to define these "resilience metrics" alongside the usual productivity ones?