AI acceptance rate: easy to measure, easy to misuse
When generative AI coding tools like GitHub Copilot first launched, we needed a simple way to answer a basic question: do these tools actually work? In that context, acceptance rate (how often developers accept an AI-generated code suggestion) offered an appealing early signal. It was easy to track and seemed to show whether suggestions were useful. If developers don't accept suggestions, it's a sign the tool's accuracy is off.
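For concreteness, acceptance rate boils down to a simple ratio. The sketch below is illustrative only; the function and parameter names are my own, not any particular tool's telemetry.

```python
def acceptance_rate(suggestions_shown: int, suggestions_accepted: int) -> float:
    """Share of AI-generated suggestions that developers accepted.

    A deliberately thin signal: it says nothing about whether the accepted
    code shipped, was later reworked, or created any real value.
    """
    if suggestions_shown == 0:
        return 0.0
    return suggestions_accepted / suggestions_shown


# Example: 120 of 400 suggestions accepted -> 0.3 (30%)
print(acceptance_rate(suggestions_shown=400, suggestions_accepted=120))
```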
But that era is over. We now know that AI coding assistants help developers solve problems, and that developers like to use them. The question is no longer whether they work; it’s how well, for whom, and where the most value is being created. This is where acceptance rate falls apart. I see it as the new "lines of code" measurement: easy to measure, easy to misuse, and largely irrelevant to business value or team productivity.
Unfortunately, some teams still over-index on acceptance rate simply because it’s accessible. It’s built into dashboards and can be compared across orgs. But that convenience is dangerous. As a performance signal, it tells you nothing about long-term impact, developer satisfaction, or actual business outcomes. While it does have a role, such as during tool evaluation, its value is bounded.
Instead, keep a close eye on your existing software engineering performance metrics: speed, quality, innovation rate, and developer experience. Then layer on AI-specific metrics across the dimensions of utilization, impact, and cost. Together, these give you the fullest picture of what's happening and help you avoid the tunnel vision that seems to plague a lot of the discussion about AI impact in the news.
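To make the layering idea concrete, here is a rough sketch of what a combined snapshot could look like. Every metric name and value below is illustrative, chosen to show the shape of the data rather than to prescribe a standard.

```python
from dataclasses import dataclass, field

@dataclass
class EngineeringMetrics:
    # Existing performance signals stay primary.
    lead_time_days: float          # speed
    change_failure_rate: float     # quality
    innovation_ratio: float        # share of time spent on new work
    dev_experience_score: float    # e.g., from periodic developer surveys

@dataclass
class AIMetrics:
    # AI-specific signals layered on top, grouped by dimension.
    utilization: dict = field(default_factory=dict)  # e.g., share of devs using AI tools weekly
    impact: dict = field(default_factory=dict)       # e.g., change in lead time for AI-assisted work
    cost: dict = field(default_factory=dict)         # e.g., license and inference spend per developer

# A hypothetical team snapshot combining both layers.
team_snapshot = {
    "engineering": EngineeringMetrics(
        lead_time_days=3.2,
        change_failure_rate=0.08,
        innovation_ratio=0.45,
        dev_experience_score=4.1,
    ),
    "ai": AIMetrics(
        utilization={"weekly_active_ai_users_pct": 0.72},
        impact={"lead_time_delta_days": -0.6},
        cost={"monthly_spend_per_dev_usd": 39.0},
    ),
}
```

Read together, a snapshot like this keeps acceptance rate in its place: at most one utilization signal among many, never the headline number.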
AI is an amplifier for existing processes. It can unlock a tremendous amount of value if your systems are ready, but it will also cause a lot of problems (poor quality, maintenance issues, security risks) if your systems are not sufficiently resilient. If you do not already have good visibility into system and team performance, the risk is even greater.
Acceptance rate may help validate early experiments with AI coding tools, but it’s no longer the metric that matters. To truly understand and unlock the value of AI in software development, teams need to focus on meaningful signals that reflect real usage, real impact, and real investment.
no bullsh*t security for developers // partnering with universities to bring hands-on secure coding to students through Aikido for Students
Really interesting question! I’ve seen acceptance rate get celebrated as a north-star metric for AI-assisted coding or content, but it often hides nuance: for example, a high acceptance rate could just mean folks are blindly clicking “accept” to move faster, rather than genuinely integrating AI into their problem-solving process. Another danger: it doesn’t capture long-term impact. Did that suggestion reduce cognitive load, improve code quality, or accelerate time to deploy? I’d love to see more teams pair acceptance rate with deeper qualitative signals (e.g., reverts, follow-up edits, incidents tied to AI-suggested changes) and even developer satisfaction surveys. Have you seen anyone successfully combine acceptance rate with a more holistic “value delivered” metric?
AI assisted modernization architect | CNCF DevEx TAG Co-Chair
Absolutely agree, Laura. It’s encouraging to see more voices challenging overused metrics and pushing the conversation toward meaningful signals. I recently shared similar reflections in an article on AI-assisted development and the gap between what’s easy to track and what actually drives value. It touches on some of the same themes, especially how early-stage adoption is more about learning, alignment, and enabling curiosity than performance dashboards. If you’re curious, I’d love your thoughts: https://www.linkedin.com/pulse/ai-assisted-development-episode-1-mona-borham-wtczf
CTO | SVP Engineering
It feels a lot like measuring lines of code committed. But worse. And it doesn’t feel like a useful metric as part of a balanced metric set in the way that measuring PRs per Engineer is. I feel like a common problem with AI metrics is that they need to be more complex to be of value (“AI acceptance rate where that code did not later prove to be subtractive, destructive, or require substantial rework”). Maybe we need AI to analyse the data coming out of our attempts at AI-assisted coding metricisation…
VP of Engineering at Interplay Learning
I actually presented a version of acceptance rate to our all-hands last week. Not because I liked the metric, but because I believe it to be closer to what the headlines are presenting right now. Comparing apples to apples feels like the right thing to do in the current moment for a wide audience. However, I then went on to show some different things around delivering value and reducing developer toil.