AI in Software Development: Productivity Gains, but at What Cost?
When Generative AI (GenAI) tools were first introduced to software development, the TTC Global team made a few key predictions. We anticipated that these tools would increase developer productivity by some measures, but that these gains would be offset by a rise in software quality issues. We also foresaw that certain types of defects would become more common and that the effort in software development would shift disproportionately toward maintenance rather than new feature development.
As my colleague Nate Custer often says, “There will be more changes made more often that are less well understood by the individuals making them—and that will end up being more difficult to maintain.” Now, emerging research confirms that GenAI is indeed transforming software development, and not always for the better. While AI-driven coding assistants are making developers faster, they are also introducing new inefficiencies, creating technical debt, and increasing the maintenance burden of software delivery.
The Productivity Paradox: More Code, Less Stability
The 2024 DORA report provides one of the most comprehensive examinations of AI's impact on software development to date. According to the findings, a 25% increase in AI adoption leads to a measurable 2.1% boost in individual developer productivity. However, this increase in output comes at a cost: delivery stability decreases by 7.2%, while delivery throughput drops by 1.5%. The report also reveals a troubling lack of trust in AI-generated code, with nearly 40% of developers expressing little or no confidence in it and roughly another 35% trusting it only somewhat.
Despite these warning signs, AI adoption continues to accelerate. Organizations are eager to integrate AI into their development processes, with 81% of respondents stating that their companies are prioritizing AI-driven solutions. Meanwhile, 75.9% of developers now rely on AI for at least one of their daily tasks. Yet despite this widespread adoption, the net effect on product performance has been negligible: the data shows that a 25% increase in AI adoption yields only a 0.2% improvement in product performance. This raises a fundamental question: is AI genuinely improving software development, or is it simply changing the nature of its challenges?
The Death of Code Reuse: A Tenfold Increase in Duplicate Code Blocks
One of the core principles of software engineering, Don’t Repeat Yourself (DRY), is being eroded by AI-generated code. The AI Copilot Code Quality 2025 report highlights a stark shift in coding patterns. The rate of refactored or "moved" code—an indicator of code reuse—has plummeted from 25% in 2021 to less than 10% in 2024. Meanwhile, the frequency of duplicate code blocks has increased tenfold in just two years.
This shift suggests that AI is optimizing for short-term convenience rather than long-term maintainability. Instead of restructuring existing code into reusable modules, AI-generated code often duplicates logic across different parts of the codebase, introducing redundancy that makes future modifications more complex and error-prone.
While this presents a major software quality challenge, it also highlights a key advantage that human developers still hold over AI. As one report notes, "The essential advantage human programmers have over AI Code Assistants, circa 2024, is the ability to consolidate previous work into reusable modules."
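To make the difference concrete, here is a small, entirely hypothetical Python sketch (none of the function names below come from the reports). The first two handlers repeat the same validation logic verbatim, the copy-paste pattern the duplication statistics count; the consolidated version pulls that logic into a single reusable helper, the kind of "moved" code that human developers still produce far more reliably than their AI assistants.

```python
# Hypothetical "before" sketch: the same validation logic pasted into two
# handlers, the copy-paste pattern the duplication statistics count.
def create_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if len(payload.get("password", "")) < 12:
        raise ValueError("password too short")
    return {"email": payload["email"].strip().lower(), "action": "create"}

def update_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if len(payload.get("password", "")) < 12:
        raise ValueError("password too short")
    return {"email": payload["email"].strip().lower(), "action": "update"}

# Hypothetical "after" sketch: the shared logic is consolidated ("moved") into
# one reusable helper, so a rule change or bug fix happens in exactly one place.
def validate_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if len(payload.get("password", "")) < 12:
        raise ValueError("password too short")
    return {"email": payload["email"].strip().lower()}

def create_user_refactored(payload: dict) -> dict:
    return {**validate_user(payload), "action": "create"}

def update_user_refactored(payload: dict) -> dict:
    return {**validate_user(payload), "action": "update"}
```

The duplicated version works just as well today, but any change to the validation rules now has to be found and applied in every copy, which is exactly where the co-change bugs discussed below tend to creep in.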
The risks associated with this trend are not just theoretical. A 2023 study, Exploring the Impact of Code Clones on Deep Learning Software, found that 57.1% of co-changed cloned code blocks were involved in bugs. The researchers concluded that duplicated code contributes directly to a higher baseline of software defects, making it clear that unchecked AI-driven duplication is a liability for software quality.
The Code Churn Explosion: More Lines, More Problems
Beyond duplication, AI-generated code is also fueling an unprecedented rise in code churn—the percentage of code that is rewritten or discarded shortly after being introduced. The AI Copilot Code Quality 2025 report reveals a 26% year-over-year increase in churn, a sign that AI-generated code is not just inflating codebases but also proving unstable over time.
In a healthy development cycle, teams refactor and refine their code, creating stable, reusable components. But with AI-driven coding, much of what is written quickly becomes obsolete, forcing developers to engage in frequent rewrites and modifications. This constant churn drains engineering resources and makes it harder to maintain a stable product over time.
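Teams that want to watch this trend in their own repositories can approximate churn from version-control history. The sketch below is a rough proxy, not the methodology used in the report: it assumes a standard git CLI is on the PATH and simply compares lines added against lines deleted over a recent window, on the reasoning that heavy deletion relative to fresh additions is a signal of code being rewritten or discarded soon after it lands.

```python
import subprocess
from collections import defaultdict

def churn_summary(repo_path: str, since: str = "30 days ago") -> dict:
    """Rough churn proxy: lines added vs. deleted in a repo since a given date.

    This is a simplified approximation, not the metric defined in the
    AI Copilot Code Quality report, which tracks how soon new lines are
    rewritten or removed after being introduced.
    """
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout

    added, deleted = defaultdict(int), defaultdict(int)
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":  # skip blanks and binary files
            continue
        added[parts[2]] += int(parts[0])
        deleted[parts[2]] += int(parts[1])

    total_added = sum(added.values())
    total_deleted = sum(deleted.values())
    return {
        "lines_added": total_added,
        "lines_deleted": total_deleted,
        # Deletions relative to additions: a crude signal of rework.
        "churn_ratio": total_deleted / total_added if total_added else 0.0,
    }

if __name__ == "__main__":
    print(churn_summary("."))
```

A churn ratio that keeps rising across successive windows is a cue to investigate before frequent rewrites become the team's normal operating mode.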
The False Promise of AI-Driven Productivity
AI is often marketed as a transformative force in software development, promising increased efficiency, lower costs, and faster delivery cycles. Yet the reality is proving more complex. AI does indeed make individual developers more productive, but this productivity does not necessarily translate into better software outcomes.
High-performing teams use AI, but product performance has seen little improvement. Instead, software delivery stability has worsened, with higher change failure rates and increased rework. Developers themselves remain skeptical, and trust in AI-generated code is alarmingly low.
The result is a paradox: while developers may feel more productive in the short term, the overall pace of stable, high-quality software delivery is not showing a meaningful improvement. The hidden costs of AI-driven development—duplication, churn, and rework—are accumulating, forcing teams to expend more effort fixing AI-generated inefficiencies.
AI Needs a Software Testing Safety Net
As AI continues to reshape software development, its impact on software quality cannot be ignored. The latest research from AI Copilot Code Quality 2025 and DORA 2024 presents a clear challenge: AI accelerates the writing of code, but it also introduces duplication, instability, and increased maintenance burdens.
To ensure that AI-driven development remains sustainable and maintainable, organizations must take proactive steps. Developers and engineering leaders must prioritize code refactoring and reuse to counteract AI’s tendency to produce redundant code. Software quality metrics must evolve beyond simple productivity measures to incorporate long-term maintainability and stability.
At the same time, software testing strategies must adapt. Automated testing, rigorous code reviews, and AI-driven quality analysis tools will be essential in identifying and mitigating the risks introduced by AI-generated code. Without these safeguards, organizations may find themselves drowning in a sea of duplicated, brittle, and ever-changing code.
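What such a safeguard might look like in practice will vary by team, but even a lightweight check in a CI pipeline can surface the problem early. The sketch below is a simplified illustration rather than any specific vendor's tool: it hashes normalized sliding windows of lines across a set of source files and reports blocks that appear in more than one place, the kind of duplication a review gate could flag before it accumulates.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 6  # minimum run of consecutive similar lines to count as a duplicate block

def normalize(line: str) -> str:
    """Collapse whitespace so trivial formatting differences do not hide duplicates."""
    return " ".join(line.split())

def duplicate_blocks(files: list[Path]) -> dict[str, list[tuple[str, int]]]:
    """Map each repeated WINDOW-line block (keyed by hash) to the places it occurs."""
    seen = defaultdict(list)
    for path in files:
        lines = [normalize(ln) for ln in path.read_text(errors="ignore").splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            block = "\n".join(lines[i : i + WINDOW])
            if not block.strip():
                continue  # ignore blank regions
            digest = hashlib.sha1(block.encode()).hexdigest()
            seen[digest].append((str(path), i + 1))
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    sources = list(Path(".").rglob("*.py"))
    for locations in duplicate_blocks(sources).values():
        print("Duplicated block found at:", locations)
```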
AI will not replace human developers—but if left unchecked, it may make software development more chaotic and unstable. GenAI tools continue to improve, and as more research on their downstream impacts emerges, it is becoming clear that they are changing software delivery in both positive and negative ways.
As software testers, we need to keep adapting our approaches and strategies to ensure that the quality of the systems we deliver, and the value they provide to users and customers, remains high.