Architecting with AI
Software Architecture doesn't change, but the software that's architected does
August 01, 2024

Mike Loukides
O'Reilly Media

Software architecture is a complex process. It involves talking to stakeholders to determine what's needed; talking to developer teams to determine their capabilities; and much more. Finances come into play: what will it cost to build and operate the software product? What kind of return on investment is expected, and is the product likely to achieve it?

Politics comes into play: who at the company is really calling the shots, who is really making the technical decisions?

Users come into play: what do they really need and what problems are they really facing?

What a company wants might not be what it needs, and it usually falls to the architect to uncover these hidden needs and get them into the project plans. Much of this happens before writing a single line of code or drawing a single architectural diagram, and it continues throughout the project's lifecycle.

So to discuss the role of AI in software architecture, we have to think about the whole process, not just code generation or creating documentation like architectural diagrams. We all know that AI has gotten quite good at generating code. There are people experimenting with using AI to generate UML and C4 diagrams. AI will definitely play a role here. But I don't expect AI to have much of an impact on the rest of the process, which involves understanding and negotiating all of those human issues.
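That kind of experimentation can be sketched in a few lines. This is a hypothetical illustration, not a real tool: `complete` stands in for whatever LLM completion call you have available, and the prompt, function names, and sanity check are all my assumptions.

```python
from typing import Callable

# Hedged sketch: asking a text model to draft a C4-style context diagram
# in Mermaid syntax. `complete` is a placeholder for any LLM completion
# callable (an SDK client, a local model, etc.); nothing here is a real API.

PROMPT_TEMPLATE = (
    "You are a software architect. Produce a Mermaid C4Context diagram "
    "for the following system description. Output only the diagram.\n\n"
    "{description}"
)

def draft_c4_diagram(complete: Callable[[str], str], description: str) -> str:
    """Ask the model for a diagram, then apply a minimal sanity check."""
    diagram = complete(PROMPT_TEMPLATE.format(description=description))
    # First-pass validation: the output should at least look like Mermaid C4.
    if not diagram.lstrip().startswith("C4Context"):
        raise ValueError("model did not return a C4Context diagram")
    return diagram

# Stubbed completion call, just for demonstration.
stub = lambda prompt: 'C4Context\n  Person(user, "Customer")'
diagram = draft_c4_diagram(stub, "An online store with a web UI and a payment service.")
```

Note that even in this toy, the interesting work is the validation step: the model's output still has to be checked by something, or someone, that knows what a correct diagram looks like.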

Why? Think about what these tasks really require. A human can walk into a meeting and size up the politics fairly quickly: who is running things, who is getting their ego stroked while the people who really control the agenda head in a different direction, whose opinions count, whose opinions don't. AI has yet to prove itself good at extracting nuance from a conversation. AI can make a meeting transcript, but it can't understand priorities and determine which are really important, particularly when the real priorities may involve agendas that nobody mentions explicitly. It can help to craft an application given a statement of design goals, but it can't infer those goals, particularly when some of the requirements are unstated (or even unknown).

Once you've got a start on the information gathering, you have to start designing — and that isn't just UML and C4. Designing is all about tradeoffs. We like to think it's about coming up with the best solution, but I've become fond of quoting Neal Ford: it's all about the least worst solution. That's reality.

What can we afford to build?

How much will it cost to run this in the cloud versus on premises?

Can we deliver good performance for the loads we expect with a budget we can afford?

How do we deal with the political struggle between cloud and on-premises advocates?

These questions don't have unambiguous answers; they all involve tradeoffs, and I've yet to see evidence that AIs are good at making tradeoffs.

I'll take that a step further. I don't know how you would acquire the data needed to train an AI to make those tradeoffs. Every project is different, and understanding the differences between projects is all about context. Do we have documentation of thousands of corporate IT projects that we would need to train an AI to understand context? Some of that documentation probably exists, but it's almost all proprietary. Even that's optimistic; a lot of the documentation we would need was never captured and may never have been expressed.

Another issue in software design is breaking larger tasks up into smaller components. That may be the biggest theme of the history of software design. AI is already useful for refactoring source code. But the issues change when we consider AI as a component of a software system. The code used to implement AI is usually surprisingly small — that's not an issue. However, take a step back and ask why we want software to be composed of small, modular components. Small isn't "good" in and of itself. Small components are desirable because they're easier to understand and debug. Small components reduce risk: it's easier to understand an individual class or microservice than a multi-million line monolith.

There's a well-known paper that shows a small box, representing a model. The box is surrounded by many other boxes that represent other software components: data pipelines, storage, user interfaces, you name it. If you just count lines of code, the model will be one of the smallest components. But let's change the perspective by stepping back: what if box size represented risk? Now that tiny box representing the model suddenly grows much larger. AI models are black boxes, and inherently unpredictable. If we really want to minimize risk rather than just lines of code, AI is problematic. That risk doesn't mean AI should be avoided, but it does add complications: applications need to incorporate guardrails to prevent the AI from behaving inappropriately, and developers need to run evaluations that test whether the system as a whole is achieving its aims. Software architects will play a big role in designing these additional components, and they will have a crucial role in determining what the risks are and how to address them.
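A guardrail, architecturally, is just another component that sits between the model and everything downstream. A minimal sketch, under assumptions of my own (the `model` callable, the policy list, and the result type are all illustrative, not any real framework's API):

```python
from dataclasses import dataclass
from typing import Callable

# Hedged sketch of a guardrail wrapper: the model is treated as an untrusted
# component whose output must pass policy checks before reaching users.

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str
    text: str

# Toy policy: terms the application must never emit, plus a length budget.
BLOCKED_TERMS = {"password", "ssn"}
MAX_OUTPUT_CHARS = 2000

def guarded_generate(model: Callable[[str], str], prompt: str) -> GuardrailResult:
    """Run the model, then validate its output before passing it downstream."""
    raw = model(prompt)
    lowered = raw.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return GuardrailResult(False, f"blocked term in output: {term}", "")
    if len(raw) > MAX_OUTPUT_CHARS:
        return GuardrailResult(False, "output exceeded length budget", "")
    return GuardrailResult(True, "ok", raw)

# Stubbed model, just for demonstration.
result = guarded_generate(lambda p: "Here is the summary you asked for.", "Summarize Q3.")
```

The point of the sketch is the shape, not the checks: real guardrails are far more involved, but they occupy exactly this position in the architecture, and someone has to design them, decide what they test for, and decide what happens when they fire.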

We're already seeing tools for generating transcripts and summaries of meetings, and those are undoubtedly useful. But it will be a while before we have tools that can understand the context for a software project, process all the issues that are raised by a software project, and make the tradeoffs that are at the heart of engineering. Software development isn't just about grinding out code. It's about understanding context and solving problems that make sense within that context.

Mike Loukides is VP of Emerging Tech at O'Reilly Media