The AI Acceleration of Our Efficiency Crisis: How LLMs Are Making Bad Software Worse
A curious thing happens when you mention that you built a website using plain HTML and CSS. Eyes narrow. Questions arise. "But how do you handle state management? What about component reusability? Don't you need a build pipeline?" The assumption is clear: if you're not using React, webpack, TypeScript, and a constellation of npm packages, you're either a novice who doesn't know better or a dinosaur stuck in the past.
This reaction reveals something deeper than mere technical preference—it exposes how completely we've lost the ability to recognize efficiency, or even to value it. When a simple, fast, maintainable solution appears primitive rather than elegant, we've fundamentally inverted our priorities. And now, with Large Language Models democratizing code generation, we're about to accelerate this crisis in ways that will make our current bloat look quaint.
The LLM Multiplication Effect
Large Language Models are arriving at precisely the wrong moment in software history. Just as we've trained an entire generation of developers to reflexively reach for complex solutions, we've handed them a tool that makes generating complexity trivial. The fundamental problem isn't that LLMs write bad code—it's that they're exceptionally good at writing the kind of code our industry has been rewarding: complicated, dependency-heavy, "enterprise-ready" solutions that work but consume orders of magnitude more resources than necessary.
Ask an LLM to create a simple blog, and it will confidently generate a full-stack application with React, Node.js, Express, MongoDB, user authentication, state management, API layers, and Docker containerization. The same task that could be solved with a few HTML files and perhaps a static site generator becomes a software ecosystem requiring ongoing maintenance, security updates, and enough server resources to power a small town.
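The "few HTML files and a static site generator" alternative is small enough to sketch in full. The version below is a minimal, illustrative example only: the `posts/` layout, the single-string template, and the output naming are all assumptions, not a recommendation of any particular tool.

```python
# Minimal static blog generator: turn a folder of plain-text posts into
# standalone HTML pages. Stdlib only -- no framework, no build pipeline.
from pathlib import Path
import html

# A deliberately tiny page template; a real site would add CSS and navigation.
TEMPLATE = (
    "<!DOCTYPE html><html><head><title>{title}</title></head>"
    "<body><h1>{title}</h1><pre>{body}</pre></body></html>"
)

def build_site(posts_dir: str, out_dir: str) -> list[str]:
    """Render every posts_dir/*.txt file to out_dir/<name>.html."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    pages = []
    for post in sorted(Path(posts_dir).glob("*.txt")):
        page = out / (post.stem + ".html")
        page.write_text(TEMPLATE.format(
            title=html.escape(post.stem.replace("-", " ").title()),
            body=html.escape(post.read_text()),
        ))
        pages.append(page.name)
    return pages
```

The entire "stack" is one function; the output is static files that any web server, or a free static host, can serve with no runtime, database, or container in sight.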
This isn't a bug in LLM behavior—it's a feature. These models trained on the collective output of an industry that has spent two decades systematically replacing simple solutions with complex ones. They've learned that "professional" software development means using sophisticated tools, following enterprise patterns, and implementing robust architectures. They haven't learned to value efficiency because we haven't been teaching it.
The multiplication effect is already visible. Developers report generating 2-10x more code per day using AI assistance. But if the baseline code was already bloated, and the AI amplifies that bloat, we're looking at exponential complexity growth. We're automating inefficiency at scale.
The Great Forgetting
Perhaps more troubling than the volume of generated code is what we're forgetting how to do. Efficiency isn't just about knowing optimization tricks—it's about developing intuition for the relationship between problems and solutions, understanding the true cost of abstractions, and maintaining the discipline to choose simplicity when complexity isn't warranted.
These skills atrophy through disuse. When you can generate a working solution in minutes, why spend hours understanding the problem deeply enough to craft an elegant one? When frameworks handle all the complexity, why learn how HTTP actually works? When ORMs abstract away databases, why understand SQL optimization?
The demoscene community—those programmers who create stunning audiovisual experiences in 64KB or less—represents a living museum of these lost skills. They know how to read assembly output from compilers, how to exploit hardware characteristics for performance, how to compress algorithms into impossible spaces. But mention these techniques to most professional developers, and you'll get the same skeptical look as the plain HTML comment. "Why would you need to know that?"
The answer becomes clear when you consider that a typical modern web page is larger than the entire Doom game, yet provides a tiny fraction of its functionality. We've normalized this absurdity by systematically devaluing the skills needed to recognize and fix it.
Modern computer science education compounds the problem. Students learn big-O notation but never profile real code. They study algorithms but implement them in high-level languages with garbage collection, never seeing the actual cost. They're taught to think in terms of frameworks and libraries, not fundamental operations. The curriculum treats performance optimization as an advanced elective rather than a core competency.
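Profiling real code is less exotic than the curriculum makes it sound. A sketch of the kind of measurement students rarely do, using nothing but the standard library (absolute timings will vary by machine; only the comparison matters):

```python
# Measure, don't assume: the actual cost of a data-structure choice.
# Membership tests on a list scan every element; on a set they hash once.
import timeit

items = list(range(10_000))
as_list = items          # O(n) membership scan
as_set = set(items)      # O(1) average-case membership

# Look up the worst-case element (the last one) repeatedly.
list_time = timeit.timeit(lambda: 9_999 in as_list, number=200)
set_time = timeit.timeit(lambda: 9_999 in as_set, number=200)

print(f"list lookup: {list_time:.6f}s, set lookup: {set_time:.6f}s")
```

Five lines of `timeit` make the big-O lecture concrete in a way the notation alone never does.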
Industry hiring practices complete the cycle. Technical interviews focus on algorithm puzzles rather than system efficiency. Code reviews check for pattern compliance rather than resource usage. Performance engineering roles exist in isolation rather than being everyone's responsibility. We've created an entire profession that sees efficiency as someone else's job.
The Incentive Inversion
The most pernicious aspect of our efficiency crisis isn't technical—it's economic. The software industry has evolved incentive structures that systematically reward complexity and punish simplicity. Understanding these misaligned incentives explains why LLMs will accelerate rather than solve our bloat problem.
Consider the career implications of choosing simple solutions. A developer who uses plain HTML gets questioned about their technical sophistication. A developer who builds a microservices architecture with Kubernetes gets promoted. The message is clear: complexity signals competence, simplicity suggests limitations.
This extends to every level of the organization. Teams are evaluated on feature velocity, not efficiency. Startups raise funding based on technical complexity indicators—microservices, machine learning, cloud-native architectures—rather than actual user value delivered per computational resource. Consultants bill more hours for complex implementations. Tool vendors sell more licenses for elaborate development stacks.
LLMs slot perfectly into these dynamics. They allow teams to appear highly productive by generating massive amounts of code quickly. They enable the rapid construction of impressively complex architectures. They provide technical sophistication on demand. All the incentives that created our current bloat crisis now operate at AI speed.
The productivity paradox deepens. Organizations report that AI-assisted development allows them to ship features faster than ever. But users report that software feels slower, consumes more resources, and breaks in mysterious ways. The disconnect isn't accidental—it's the predictable result of optimizing for the wrong metrics.
The Infrastructure Arms Race
Our tolerance for inefficiency has created an entire economy dedicated to scaling around software bloat rather than eliminating it. Cloud computing, containerization, CDNs, database optimization services, performance monitoring tools—billion-dollar industries exist primarily to manage the consequences of inefficient software.
This infrastructure arms race accelerates with AI assistance. Instead of writing more efficient code, we'll generate more sophisticated infrastructure to run inefficient code faster. Kubernetes clusters will spawn auto-scaling pods to handle the resource demands of bloated applications. Machine learning models will predict when to provision additional cloud resources to compensate for poor algorithms. We'll automate complexity management rather than complexity reduction.
The economic logic is perverse but consistent. It's cheaper to add more servers than to optimize code, especially when the optimization requires rare skills and the servers can be provisioned instantly. Cloud vendors profit from inefficiency—their revenue grows with resource consumption. DevOps tool companies profit from complexity—their products become more valuable as systems become harder to manage.
LLMs threaten to create a new category of infrastructure: AI-powered development tools that generate, optimize, and manage the complexity they help create. We'll have AI systems monitoring other AI systems managing the output of AI-generated code running on AI-optimized infrastructure. Each layer adding overhead, each abstraction creating new failure modes, each solution requiring new solutions.
The Cognitive Load Crisis
The human cost of our complexity explosion extends beyond performance metrics. Every additional abstraction layer, framework, and tool adds to the cognitive load required to understand and maintain systems. We're approaching the limits of what individual developers can comprehend, leading to increasing specialization and decreasing system-wide understanding.
LLM-assisted development exacerbates this problem in subtle ways. When you generate code you don't fully understand, you create maintenance burdens for your future self and your teammates. When systems become too complex for any individual to comprehend completely, debugging becomes archaeological—digging through layers of generated abstractions to find the source of unexpected behavior.
The documentation problem compounds exponentially. Traditional software development at least produces artifacts like requirements, design decisions, and implementation notes that future maintainers can reference. LLM-generated code often comes with minimal context about why particular approaches were chosen or what alternatives were considered. The generated solution works, but the reasoning disappears.
This creates a new form of technical debt: comprehension debt. Systems that work but cannot be fully understood by their maintainers accumulate like time bombs. When they fail—and complex systems always eventually fail—the debugging process requires reverse-engineering not just the implementation but the reasoning that led to it.
The Monoculture Problem
LLMs trained on the same corpus of existing code will tend to generate similar solutions to similar problems. This creates a dangerous monoculture where everyone builds systems the same way, using the same patterns, making the same assumptions, and inheriting the same weaknesses.
The diversity of programming approaches—different languages, paradigms, architectural styles—historically provided resilience. When one approach proved flawed, alternatives existed. When performance requirements changed, different trade-offs could be explored. This diversity required individual expertise and institutional knowledge that took years to develop.
AI-assisted development threatens to collapse this diversity into a single, dominant approach: whatever patterns the LLMs learned from their training data. If the training corpus emphasized certain frameworks, architectures, or libraries, those become the default solutions for everyone. Innovation happens within narrow bounds rather than exploring fundamental alternatives.
The feedback loop accelerates the monoculture. As more AI-generated code enters public repositories, future AI models train on increasingly homogeneous examples. The collective knowledge base converges on a single way of solving problems rather than maintaining healthy diversity.
Recommended by LinkedIn

The Death of Constraints
Constraints force creativity. When memory was expensive, programmers learned to be resourceful. When bandwidth was limited, protocols were designed efficiently. When processing power was scarce, algorithms were optimized carefully. These constraints produced elegant solutions that did more with less.
Modern development has systematically removed constraints. Memory is cheap, so we don't optimize data structures. Bandwidth is plentiful, so we don't compress assets. Processing power is abundant, so we don't profile algorithms. Storage is infinite, so we don't clean up technical debt. The removal of constraints removed the forcing function for efficiency.
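The "memory is cheap" habit is easy to make visible. A small sketch comparing three ways to store the same two-field record; the exact byte counts are CPython implementation details, but the ordering is the point:

```python
# The same (x, y) record stored three common ways, and what each costs.
import sys

class SlotPoint:
    # __slots__ removes the per-instance attribute dict entirely.
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x, self.y = x, y

dict_point = {"x": 1.0, "y": 2.0}   # the "just use a dict" default
tuple_point = (1.0, 2.0)            # positional, immutable
slot_point = SlotPoint(1.0, 2.0)    # named fields, no dict overhead

for name, obj in [("dict", dict_point), ("tuple", tuple_point), ("slots", slot_point)]:
    print(name, sys.getsizeof(obj), "bytes")
```

Multiplied across millions of records, the dict-by-default choice is a real cost; it just never shows up until someone measures.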
LLMs complete this process by removing the constraint of human development time. When generating code costs nearly nothing, why optimize existing code? When spinning up new services is trivial, why consolidate functionality? When adding dependencies is effortless, why understand what they do?
The demoscene thrives specifically because it maintains artificial constraints. Size limits force programmers to question every byte. Real-time requirements demand efficient algorithms. Competition rewards elegance over feature completeness. These constraints produce software that would seem magical to developers accustomed to modern bloat—complex behaviors emerging from minimal code, sophisticated effects running smoothly on limited hardware.
The Efficiency Extinction Event
We're witnessing what may be the final generation of programmers who remember how to optimize. The developers who learned assembly language, who debugged with oscilloscopes, who measured cycles and bytes—they're retiring. The knowledge they accumulated through decades of working with constrained resources is walking out the door.
This wouldn't matter if we were training replacements, but we're not. New developers learn React before they learn JavaScript. They use Docker before they understand processes. They deploy to Kubernetes before they've managed a single server. Each abstraction layer they start with moves them further from the fundamentals that enable optimization.
LLMs accelerate this knowledge extinction by making fundamental understanding seem unnecessary. Why learn how databases work when you can generate SQL queries? Why understand networking when you can scaffold microservices? Why study algorithms when you can generate implementations? The immediate productivity gains mask the long-term competency loss.
The historical parallel is striking. Medieval Europe lost the ability to build Roman-style concrete structures, not because the knowledge was theoretically impossible to recover, but because the practical skills required to apply it had disappeared. Similarly, we're losing the practical skills required to build efficient software, even though the theoretical knowledge remains accessible.
The Plain HTML Heresy
When someone builds a website using plain HTML and CSS, the skeptical reaction reveals our industry's inverted values. The simple solution appears primitive not because it lacks functionality, but because it lacks complexity. We've trained ourselves to associate sophistication with elaborateness rather than elegance.
This perception problem has real consequences. Simple solutions get rejected in favor of complex ones not because they're inadequate, but because they appear inadequate to stakeholders who've learned to equate technical sophistication with business value. The blog that loads instantly and costs pennies to host loses to the React application that requires expensive infrastructure and ongoing maintenance.
The irony is that the simple solution often provides superior user experience. Plain HTML loads faster, works more reliably, and remains accessible across a wider range of devices and network conditions. But these benefits are invisible to decision-makers who've learned to evaluate technology choices based on complexity metrics rather than user outcomes.
LLMs will reinforce this bias by making complex solutions easier to generate than simple ones. Ask for a blog, get a full-stack application. Ask for a form, get a component library. The path of least resistance leads through maximum complexity.
The Resource Abstraction
Modern developers have largely lost touch with the physical reality of computation. Code runs "in the cloud" rather than on specific machines. Databases are "serverless" rather than running on hardware with limitations. Applications scale "automatically" rather than consuming finite resources.
This abstraction enables tremendous productivity but obscures the real costs of software choices. When you don't see the server bills, you don't feel the impact of inefficient queries. When you don't manage the hardware, you don't understand the relationship between code and energy consumption. When scaling happens automatically, you don't experience the consequences of algorithms that don't scale.
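The inefficient-query problem has a canonical shape: the N+1 access pattern that ORMs make easy to write without noticing. A self-contained sketch with an in-memory SQLite database (the schema and row counts are invented for illustration):

```python
# N+1 queries vs. one JOIN: counting the round trips an ORM can hide.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
""")
conn.executemany("INSERT INTO authors VALUES (?, ?)",
                 [(i, f"author{i}") for i in range(100)])
conn.executemany("INSERT INTO posts VALUES (?, ?, ?)",
                 [(i, i % 100, f"post{i}") for i in range(1000)])

# N+1 style: one query for the posts, then one more per post for its author.
posts = conn.execute("SELECT id, author_id FROM posts").fetchall()
queries_n_plus_1 = 1 + len(posts)  # 1 list query + 1 lookup per row

# Set-based style: the same data in a single JOIN.
rows = conn.execute(
    "SELECT p.title, a.name FROM posts p JOIN authors a ON a.id = p.author_id"
).fetchall()
queries_join = 1

print(queries_n_plus_1, "queries vs", queries_join)
```

On a local machine both versions feel instant, which is exactly why the 1000x difference in round trips only surfaces as a cloud bill or a production incident.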
LLMs complete this abstraction by removing the human effort required to generate inefficient code. The traditional constraint—developer time—that somewhat limited bloat has disappeared. We can now generate arbitrarily complex solutions without feeling the cost until runtime.
The environmental implications are staggering. Software's appetite for computation keeps growing faster than hardware efficiency gains can offset it. Every inefficient algorithm multiplied across millions of users represents megawatts of wasted energy. But these costs remain externalized, invisible to the developers making the choices that create them.
The Competency Inversion
Perhaps the most troubling aspect of our efficiency crisis is how it's inverted our understanding of technical competency. Skills that produce better software—understanding systems deeply, optimizing for constraints, choosing appropriate tools—are increasingly seen as outdated specializations rather than core competencies.
Meanwhile, skills that produce more complex software—orchestrating microservices, managing containerized deployments, integrating numerous frameworks—are seen as cutting-edge expertise. We've confused complexity management with technical sophistication, activity with accomplishment.
LLMs threaten to complete this inversion by making complexity generation effortless while leaving optimization skills as difficult as ever. The developer who can prompt an AI to generate a complex architecture appears more productive than the developer who carefully crafts a simple solution. The metrics we use to measure productivity—lines of code, features shipped, services deployed—all favor the complex approach.
The Way Forward
Breaking free from this crisis requires recognizing it as a systems problem rather than a technical one. Individual developers choosing better tools won't solve systemic incentive misalignment. Organizations optimizing local metrics won't address global inefficiency. The solution requires coordinated changes across education, hiring, evaluation, and tooling.
Educational reform must prioritize understanding over frameworks. Students should learn to measure and optimize before they learn to orchestrate and deploy. Computer science curricula should treat efficiency as a core competency rather than an advanced elective. The goal isn't to return to assembly language but to develop intuition for the relationship between code and resources.
Industry practices must evolve to reward efficiency alongside functionality. Performance budgets should be as common as financial budgets. Code reviews should evaluate resource usage alongside correctness. Career advancement should recognize optimization skills alongside feature delivery. We need metrics that capture total cost of ownership rather than just development velocity.
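A performance budget can be as literal as a financial one: a number the build fails against. A hypothetical sketch of such a check (the directory layout, file types, and the 200 KB figure are illustrative assumptions, not a standard):

```python
# A performance budget as code: fail the build when shipped assets
# exceed a hard size limit, the way a financial budget caps spend.
from pathlib import Path

BUDGET_BYTES = 200 * 1024  # e.g. 200 KB for all shipped JS/CSS combined

def check_budget(dist_dir: str, budget: int = BUDGET_BYTES) -> tuple[int, bool]:
    """Return (total asset bytes, whether the budget holds)."""
    total = sum(
        p.stat().st_size
        for p in Path(dist_dir).rglob("*")
        if p.suffix in {".js", ".css"}
    )
    return total, total <= budget
```

Wired into CI, a check like this makes bloat a visible, blocking failure instead of a slow drift nobody owns.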
Tool design must make efficient choices easier rather than harder. LLMs could theoretically help by generating optimized implementations rather than just working ones. Development environments could provide real-time feedback about resource consumption. Deployment platforms could make efficiency metrics as visible as functionality metrics.
The demoscene offers a model for maintaining efficiency skills within communities that value them. Size competitions, performance challenges, and optimization contests provide motivation for developing and sharing efficient techniques. Professional development could adopt similar approaches—hackathons with resource constraints, performance competitions, efficiency showcases.
The Choice Ahead
We stand at an inflection point. LLMs could accelerate our descent into unsustainable complexity, or they could become tools for rediscovering efficiency. The outcome depends on the choices we make about how to integrate AI assistance into development workflows.
If we use LLMs to generate more complex solutions faster, we'll create an efficiency crisis that no amount of hardware improvement can solve. Applications will become slower, more fragile, and more expensive to operate. The skills needed to fix these problems will continue to atrophy until recovery becomes impossible.
Alternatively, we could use LLMs to amplify human judgment rather than replace it. AI assistance for optimization rather than just generation. Tools that help us understand tradeoffs rather than obscure them. Systems that make efficient choices easier rather than hiding them behind abstractions.
The plain HTML website that triggers skepticism should trigger recognition instead—recognition that simplicity often represents the highest form of sophistication, that efficiency is a feature worth optimizing for, and that the most impressive code is often the code that isn't there.
Our industry's future depends on recovering these values before the last generation that remembers them retires. The choice is ours, but the window is closing. In a world where generating complexity costs nothing, the ability to recognize and create simplicity becomes the rarest and most valuable skill of all.
The question isn't whether AI will change how we write software—it already has. The question is whether we'll use it to become better programmers or just more productive at creating problems for our future selves to solve.
I wholeheartedly agree. When FORTRAN was invented as a substitute for machine language, there was a real reduction in complexity. With some of the latest frameworks in Machine Learning, I no longer feel that this is the case. Instead, I'd rather stick with a "lower" layer of plain Python. All the more so, as these are not "learn once, apply forever" tools. Au contraire, they keep getting "improved" and by the time they're stable, they are replaced by the latest craze.
We can use AI to rewrite. AI doesn't need frameworks and layers of abstraction to achieve the same result. At this point, we should probably invest in validating outcomes rather than asking the AI to emulate a programmer.
I've met very few people who want to understand a system and actually do understand it. What they find are piles of technical debt, the result of chasing deadlines and business goals. Some even profit from this by selling optimizations. So the effort you put into understanding must pay off; otherwise it becomes a hobby, or science, rather than a business.