Laura Tacho’s Post

CTO @ DX, Developer Intelligence Platform

This week in The Pragmatic Engineer, I'm sharing the real metrics that 18 companies use to measure AI impact. Thanks to all of these companies for letting me share their approaches, which gives us all a deeper look into AI adoption and impact in the real world (not just the headlines). Read the full article here: https://lnkd.in/dX2ivkgw

  • Table: the metrics 18 companies use to measure AI impact

I love this comprehensive study, and I enjoyed seeing that different companies measure AI impact in their own way. I would break these into leading and lagging indicators. For example, DAU/WAU are leading indicators, but they don't necessarily show improvement in the SDLC. Here are the lagging indicators I'd focus on:
- feature velocity
- time-to-value for customers
- false positives (are these tools merging PRs that should not have been merged, and how much churn does that cause)
- code maintainability
- developer CSAT
For AI specifically, I'd like to see ROI and how it translates into productivity. For example: $XX spent on AI tools, and features released 20% faster.
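To make that last example concrete, here is a minimal back-of-the-envelope sketch in Python of how such an ROI figure could be framed. The ai_tooling_roi helper and every number in it are hypothetical placeholders, not figures from the article.

# Hypothetical sketch: rough ROI framing for AI tooling spend vs. delivery speedup.
# All figures are made-up placeholders, not data from the article.
def ai_tooling_roi(monthly_spend_usd, team_size, avg_loaded_cost_usd, velocity_gain_pct):
    """Return a naive ROI multiple: value of capacity gained vs. tool spend."""
    # Approximate the speedup's value as extra engineering capacity "created".
    capacity_value = team_size * avg_loaded_cost_usd * (velocity_gain_pct / 100)
    return capacity_value / monthly_spend_usd

# Example: 50 engineers at $15k/month loaded cost each, 20% faster feature delivery,
# and $30k/month spent on AI tools -> a 5.0x return on the tooling spend.
print(ai_tooling_roi(monthly_spend_usd=30_000, team_size=50,
                     avg_loaded_cost_usd=15_000, velocity_gain_pct=20))

This kind of framing is obviously naive (it treats a velocity gain as pure capacity), but it makes the "$XX spent, 20% faster" claim auditable.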

Mikhail Konovalov

Staff Platform Engineer | Turning vibe-coded MVPs into production grade systems

4d

Great article.
> engineers who regularly use AI merge 20% more pull requests each week
Wondering if that might be because AI context limits push engineers to scope tasks into smaller pieces, so instead of 1 PR per task they merge 2, 3, or even 5? And if that's the case, would 20% more PRs actually mean a reduction in velocity?

Kevin Howren

Chief Product and Technology Officer | Scaling Teams and Transforming Organizations through Servant Leadership

4d

Very interesting, Laura Tacho. While some of these would be relatively easy to quantify, I'm curious how others are measured. I am guessing they are confident in their techniques as no measurement is better than bad measurement.

Really appreciative of this table and the work that went into this. That being said, the actual KPIs seem so underwhelming to me. Maybe I'm looking for magical silver bullets that don't exist -- honestly, this is most likely the case, but a lot of these scream "correlation, not causation" to me (ex: DAU increase due to pressure from management, not because devs actually benefit from using those features).

Vanessa Yuen

Engineering leader & manager

4d

That arrived in my inbox just now; I saw your name and got super excited! Best crossover episode ever! Can’t wait to dive in soon.

Raj Jose

Senior Manager, Engineering Productivity and Quality Tooling | Scalability Testing

3d

Thanks for sharing, Laura Tacho! Hours saved per developer and AI spend sound very interesting and helpful.

Andrew Ritchie

AWS Cloud Ally | 1M USD annual cost saves | Engineering Leader | Site Reliability & Platform Specialist | Speaker – KCD 2024 | Obsessed with DevEx & Delivery at Scale | Platform Engineering

4d

Thanks, Laura Tacho. I was discussing some of these ideas today, in particular not limiting ourselves to one specific tool.

Christophe Pasquier

CEO at Slite.com (YC W18) & Super.work | Deep AI search & agents connected to all your work apps

2d

That's gold! I love seeing stickiness as the most recurring metric here; you can feel the market becoming aware of false promises and looking at recurring usage first (it's also what I focus on as a provider). Thanks so much for building this report!

Rich Delisser

VP Engineering at Cisco

4d

Great article, Laura. I read it earlier today.

Very useful, Laura Tacho, and I appreciate the questions from Gergely Orosz, as always.
