Laura Tacho put in an enormous amount of work to bring this piece to life, covering the AI impact metrics used across a wide variety of companies, from tech to finance. It's featured in The Pragmatic Engineer.
Her piece covers:
- The real metrics companies like Google, GitHub, and Microsoft use to measure AI impact
- How they use those metrics to figure out what’s working and what’s not
- How you can measure AI impact
- AI-impact metric definitions and a measurement guide
This week in The Pragmatic Engineer, I'm sharing the real metrics that 18 companies use to measure AI impact.
Thanks to all of these companies for letting me share their approaches, which gives us all a deeper look into AI adoption and impact in the real world (not just the headlines).
Read the full article here: https://lnkd.in/dX2ivkgw
How do companies measure whether AI tools are worth it, and what impact they have on their teams? Check out the latest writing from Laura Tacho, with notes from companies like Google, Dropbox, Microsoft, Monzo, Atlassian, Adyen, and Booking.com.
Grab a cup of coffee or tea, and enjoy the read.
Grateful to Laura Tacho and The Pragmatic Engineer for pulling together such a sharp overview of how tech companies are starting to measure the impact of AI. 🙏 It’s a must-read: practical, grounded, and exactly the kind of clarity our industry needs.
One thought I’d add: metrics often reflect where the work is happening today. Most are still anchored at the code layer: PRs merged, test cycles, deployment frequency. And rightly so: that’s where AI is clearly making things faster.
But history shows engineering always climbs abstraction layers:
binary → assembly → languages → frameworks → cloud → AI-augmented intent.
With each step, we gain leverage and accumulate entropy. Abstractions are power multipliers, but they also hide complexity and fragility.
So perhaps in addition to asking “how much faster did AI help us ship?”, we should also be asking:
- At what level of abstraction are we now working?
- How much entropy has the system absorbed at that level?
- Are we converting abstraction into resilience, or into debt?
Laura’s call to track impact over time is spot on. And I'd extend it: the real signal may lie in longitudinal abstraction and entropy metrics - revealing whether our trajectory is sustainable.
Because the long-term test of AI in engineering won’t just be about the code generated this week - it will be whether, years from now, those abstractions leave us with systems we can still trust, evolve, and build upon.
How on earth do we find the signals when what we are measuring is changing so fast? Perhaps the answer is to anchor on what endures: reliability, resilience, trust, cognitive load, and adaptability. But are these the right anchors for the age of AI-augmented engineering?
#SoftwareEngineering #DeveloperProductivity #AIinEngineering #DevEx #EngineeringLeadership, DX, The Pragmatic Engineer
Thought I would share my favorite takeaways from Laura Tacho's recent post:
- 60% of engineering leaders say they lack clear metrics from their AI tools (quality, ROI, or developer experience).
- Companies already tracking metrics (DORA, SPACE, DX Core 4) have an easier time adding AI-specific ones.
- AI should be measured against the same fundamental metrics as the rest of engineering work.
👉 The gap isn’t adoption. It’s measurement.
Teams can only scale AI responsibly if they know what’s working, what isn’t, and how it impacts developers themselves.
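As a thought experiment (the data model and field names below are my own assumptions, not anything from the article), layering AI-specific rates on top of an existing productivity baseline could look roughly like this:

```python
# A minimal sketch: hypothetical weekly snapshot, not a standard schema.
from dataclasses import dataclass

@dataclass
class WeeklySnapshot:
    prs_merged: int       # baseline throughput
    deployments: int
    change_failures: int  # deployments that caused a failure
    total_devs: int
    ai_active_devs: int   # devs who used the AI tool this week
    ai_assisted_prs: int  # merged PRs where the AI tool was used

def ai_metrics(s: WeeklySnapshot) -> dict:
    """AI-specific rates, computed alongside (not instead of) the baseline."""
    return {
        "ai_adoption_rate": s.ai_active_devs / s.total_devs,
        "ai_assisted_pr_share": s.ai_assisted_prs / max(s.prs_merged, 1),
        "change_failure_rate": s.change_failures / max(s.deployments, 1),
    }

week = WeeklySnapshot(prs_merged=120, deployments=45, change_failures=3,
                      total_devs=50, ai_active_devs=38, ai_assisted_prs=70)
print(ai_metrics(week))
```

The point being: the AI-specific rates only mean something next to the baseline numbers you were already tracking.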
For leaders, this is a must-read: how 18 companies are measuring AI impact in practice, and what we're learning from it.
While new KPIs are gaining importance, it is still clear that more traditional metrics such as the four DORA metrics remain highly significant. Investing part of your resources into analytics and insights never seems like a bad idea.
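For reference, here's a minimal sketch of how those four DORA metrics fall out of simple deployment and incident records. The data shapes and numbers are assumptions for illustration, not a standard API:

```python
from datetime import datetime, timedelta
from statistics import median

# Each deployment: when it shipped, lead time from first commit, and whether
# it caused a change failure. All values here are made up for illustration.
deployments = [
    (datetime(2025, 9, 1), timedelta(hours=20), False),
    (datetime(2025, 9, 3), timedelta(hours=30), True),
    (datetime(2025, 9, 5), timedelta(hours=16), False),
]
restore_times = [timedelta(hours=2)]  # recovery time for each failure

days_observed = 7
deployment_frequency = len(deployments) / days_observed  # deploys per day
lead_time_for_changes = median(lt for _, lt, _ in deployments)
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)
time_to_restore = median(restore_times)

print(deployment_frequency, lead_time_for_changes,
      change_failure_rate, time_to_restore)
```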
Huge props to Laura Tacho for this incredible research - the effort behind collecting these metrics from 18 top companies is remarkable.
The most striking insight for me isn’t any single metric - it’s the lack of consensus between companies on what actually matters.
GitHub focuses on time savings and PR velocity. Google tracks code acceptance rates. Each company is essentially running a different experiment.
This reflects a fundamental truth about enterprise AI measurement: we’re all still figuring it out.
The companies getting this right aren’t chasing the “perfect” AI metric. They’re tracking 3-4 indicators that align with their specific engineering culture and business priorities. Microsoft cares about cycle time because they ship continuously. Vanguard tracks adoption rates because they’re in regulated finance where uptake matters more than raw speed.
Most importantly, notice how many companies track both quantitative impact (time saved, throughput) AND developer experience (CSAT, satisfaction). AI tools that make people faster but miserable don’t scale.
I’m confident that the principle works everywhere, not just for developers: we need to measure both the speed and the experience. Your future adoption depends on getting both right.
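As a toy illustration of that principle (the thresholds and numbers below are made up, not from any of the 18 companies), reading a speed signal and a satisfaction signal together changes the verdict entirely:

```python
def verdict(hours_saved_per_dev_week: float, csat: float) -> str:
    """csat: average satisfaction with the AI tool on an assumed 1-5 scale."""
    if hours_saved_per_dev_week >= 2 and csat >= 4.0:
        return "scaling candidate: faster AND developers want it"
    if hours_saved_per_dev_week >= 2:
        return "warning: faster but disliked - adoption will stall"
    if csat >= 4.0:
        return "liked but not (yet) faster - keep experimenting"
    return "neither signal is there - reassess the rollout"

print(verdict(3.0, 2.8))  # fast but miserable: does not scale
print(verdict(3.0, 4.5))  # fast and liked: worth expanding
```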
I hear Laura's sadness! Apparently we’re back to measuring lines of code. What year is it again? 🙃
I LOVE this article and it deserves huge kudos! Not only does it expose us all to approaches other than our own, it also surfaces a hopeful trend: companies are mixing trusted metrics (throughput, failure rates) with fit-for-purpose ones (bad developer days, innovation ratio). That balance avoids metrics envy and cookie-cutter reporting.
Another gem: the importance of keeping measurement iterative and experimental cannot be overstated. I've been thinking lately about questions that challenge some of our long-standing beliefs. Are speed and thoroughness mutually exclusive? Or do we need to keep experimenting with how we operationalize them (especially now with agents) until we hit an acceptable trade-off, given our current and upcoming AI tools? And how do we capture more abstract value that is nonetheless critical? For example, how do we measure gains in clarity when an agent helps a PM ground an engineering team with prototypes that make the problem suddenly click? That might be the biggest win of all.
AI’s impact is one big developing story. Frameworks will shift and beliefs will get challenged. And the companies that thrive will be the ones willing to learn, unlearn, and adapt (not reporting lines of code like it’s 1999 😉).
Thank you Laura and Gergely for this great, informative write-up that sparks reflection!
Another great piece from Laura Tacho 🔥
I found this research very interesting, and it matches what I'm seeing directly when talking to Okteto customers about their AI adoption roadmaps.
Here are a few of my key takeaways from Laura's report:
1. Foundation first: companies succeeding with AI measurement already had strong developer productivity baselines. You can't measure AI impact if you don't know what "good" looked like before AI.
2. This is a balancing act: the most sophisticated teams track speed AND quality metrics together... think PR throughput alongside change failure rates (see the sketch after this list).
3. Developer experience remains key: Microsoft has a "bad developer days" metric and Dropbox tracks a 90% adoption rate. The most telling indicator to keep an eye on is whether developers actually want to use these tools.
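Here's the sketch mentioned in point 2: an assumed data shape (not from the report) for reading throughput and change failure rate side by side, split by whether the AI tool was involved:

```python
# Each merged PR: was the AI tool used, and did the change cause a failure?
prs = [
    {"ai_assisted": True,  "caused_failure": False},
    {"ai_assisted": True,  "caused_failure": True},
    {"ai_assisted": True,  "caused_failure": False},
    {"ai_assisted": True,  "caused_failure": False},
    {"ai_assisted": False, "caused_failure": False},
    {"ai_assisted": False, "caused_failure": True},
]

for label, used_ai in [("AI-assisted", True), ("other", False)]:
    cohort = [p for p in prs if p["ai_assisted"] == used_ai]
    failures = sum(p["caused_failure"] for p in cohort)
    print(f"{label}: {len(cohort)} PRs merged, "
          f"{failures / len(cohort):.0%} change failure rate")
```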
What metrics are you finding most valuable for measuring AI impact in your engineering teams?
#AI #DeveloperProductivity #DevEx
https://newsletter.pragmaticengineer.com/p/how-tech-companies-measure-the-impact-of-ai