This week in The Pragmatic Engineer, I'm sharing the real metrics that 18 companies use to measure AI impact. Thanks to all of these companies for letting me share their approaches, which gives us all a deeper look into AI adoption and impact in the real world (not just the headlines). Read the full article here: https://lnkd.in/dX2ivkgw
Great article. > engineers who regularly use AI merge 20% more pull requests each week
Wondering if that could be because AI context limits push engineers to scope tasks into smaller pieces, so instead of 1 PR per task they end up merging 2, 3, or even 5? And if that's the case, could 20% more PRs actually mean a reduction in velocity?
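A minimal sketch of the arithmetic behind this question, using purely hypothetical numbers (not from the article): if the same task now ships as two smaller PRs instead of one, a 20% increase in merged PRs can still work out to fewer tasks completed per week.

```python
# Illustrative assumption: AI users split the same work into more, smaller PRs.
# All figures below are made up for the sake of the arithmetic.
baseline_prs_per_week = 10        # assumed pre-AI throughput
baseline_prs_per_task = 1         # assumed: one PR per task before AI

ai_prs_per_week = baseline_prs_per_week * 1.2   # the reported 20% increase
ai_prs_per_task = 2               # assumed: tasks now split across two PRs

tasks_before = baseline_prs_per_week / baseline_prs_per_task   # 10 tasks/week
tasks_after = ai_prs_per_week / ai_prs_per_task                # 6 tasks/week

print(f"Tasks per week before AI: {tasks_before:.0f}")
print(f"Tasks per week with AI:   {tasks_after:.0f}")
```

Under those assumptions, PR count rises while task throughput falls, which is why PR volume alone can be a misleading velocity signal.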
Very interesting, Laura Tacho. While some of these would be relatively easy to quantify, I'm curious how others are measured. I am guessing they are confident in their techniques as no measurement is better than bad measurement.
Really appreciate this table and the work that went into it. That said, the actual KPIs seem underwhelming to me. Maybe I'm looking for magical silver bullets that don't exist -- honestly, that's most likely the case -- but a lot of these scream "correlation, not causation" to me (e.g., a DAU increase driven by pressure from management, not because devs actually benefit from using those features).
That arrived in my inbox just now, I saw your name and got super excited! Best crossover episode ever! Can’t wait to dive in soon.
Thanks for sharing, Laura Tacho! Hours saved per developer and AI spend sound very interesting and helpful.
Thanks, Laura Tacho! I was discussing some of these ideas today, in particular not limiting ourselves to one specific tool.
That's gold! I love seeing stickiness as the most recurring metric here; you can feel the market becoming aware of false promises and looking at recurring usage first (it's also what I focus on as a provider). Thanks so much for building this report!
Great article, Laura. I read it earlier today.
Very useful, Laura Tacho, and I appreciate the questions from Gergely Orosz, as always.
I love this comprehensive study, and I enjoyed seeing that different companies measure impact in their own way. I would break these into leading and lagging indicators. For example, DAU/WAU are leading indicators, but they don't necessarily show improvement in the SDLC. Here are the lagging indicators I'd focus on:
- feature velocity
- time-to-value for customers
- false positives (are these tools merging PRs that should not have been merged, and how much churn does that cause?)
- code maintainability
- developer CSAT
For AI specifically, I'd like to see ROI and how it helps increase productivity. For example, $XX spent on AI tools while releasing features 20% faster.