Agentic AI: old architecture patterns with an AI twist

Over the past year, “agentic AI” has become one of the most popular terms in the AI space. The idea of AI agents (autonomous systems that can plan, reason, and take actions on our behalf) captures the imagination. They are described as digital workers capable of chaining tasks together, using tools, and adapting to new situations.

But when you look under the hood, you quickly realise that most AI agents are built on the same software architecture patterns we have been applying for decades. The difference is not so much in the underlying structure as in the way decisions are made.

[Figure: Basic architecture for Agentic AI]

The familiar building blocks

Almost every AI agent framework today relies on patterns that software architects have been using for years:

  • Orchestration – A central component decides which action to take next, calling different services or functions in a controlled order. In AI agents, this is often a loop where the LLM generates the next “step” or “tool call.”
  • State management – Agents need memory. They must keep track of previous actions, context, and intermediate outputs. Whether this is done in a database, vector store, or in-memory cache, it is the same fundamental concept of maintaining application state.
  • Event-driven architecture – Many agents respond to triggers or changes. An event (a user input, an API response, a status update) can launch new tasks. Event-driven systems have been a core software design principle for decades.
  • Microservices and APIs – Agents rarely operate in isolation. They call APIs, interact with external tools, and delegate work to specialised services. This modular approach is simply microservices architecture applied in an AI context.

None of these ideas are new. We have been building orchestration engines, workflow systems, and event-driven applications for years. What is new is that an LLM now decides which action to take, instead of a hard-coded set of rules.
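
To make that contrast concrete, here is a minimal sketch in Python. All names are illustrative rather than taken from any particular framework: the "LLM" is represented by an injected pick_next_tool callable, the tools are plain functions standing in for external services, and the state object is a simple history of actions.

from dataclasses import dataclass, field

# Toy "tools": ordinary functions the orchestrator can call,
# standing in for the APIs or microservices an agent would use.
def search_docs(query: str) -> str:
    return f"search results for '{query}'"

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"

TOOLS = {"search_docs": search_docs, "send_email": send_email}

@dataclass
class AgentState:
    """State management: history of actions and intermediate outputs."""
    history: list = field(default_factory=list)

def hardcoded_orchestrator(state: AgentState, query: str) -> None:
    # Classic orchestration: the developer fixes the flow in advance.
    result = TOOLS["search_docs"](query)
    state.history.append(("search_docs", result))
    confirmation = TOOLS["send_email"]("user@example.com", result)
    state.history.append(("send_email", confirmation))

def llm_orchestrator(state: AgentState, goal: str, pick_next_tool) -> None:
    # Agentic orchestration: an LLM call (represented here by the injected
    # pick_next_tool callable) chooses the tool and its arguments based on
    # the goal and the current state, instead of a hard-coded sequence.
    tool_name, args = pick_next_tool(goal, state.history, list(TOOLS))
    result = TOOLS[tool_name](**args)
    state.history.append((tool_name, result))

The structure is identical in both functions; only the source of the "what next?" decision changes.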

What is actually new

The shift brought by agentic AI is in decision-making. Traditionally, orchestration logic is explicit: developers define the flow of actions in advance.

With agentic AI, the orchestration loop often looks like this:

  1. The agent receives a goal or input.
  2. The LLM evaluates the current state and generates the next action.
  3. The system executes the action and updates the state.
  4. The loop repeats until the goal is reached.

This pattern is essentially a planning loop, something we have seen in robotics and AI research for decades. What has changed is the accessibility: LLMs can now serve as general-purpose planners without requiring explicit programming of every decision path.
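
As a rough sketch of that loop (the call_llm argument and the "finish" convention are assumptions for illustration, not a specific API), it fits in a few lines of Python:

def run_agent(goal: str, tools: dict, call_llm, max_steps: int = 10) -> list:
    """Minimal planning loop: the LLM proposes the next action until done."""
    state = []  # history of (action, result) pairs
    for _ in range(max_steps):  # hard step budget instead of looping forever
        # Steps 1-2: the LLM sees the goal and current state, proposes an action.
        action = call_llm(goal=goal, state=state, available_tools=list(tools))
        if action["name"] == "finish":  # the model signals the goal is reached
            break
        # Step 3: execute the chosen tool and record the outcome in state.
        result = tools[action["name"]](**action.get("args", {}))
        state.append((action["name"], result))
    # Step 4: the loop repeats until "finish" or the step budget runs out.
    return state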

However, this flexibility comes with trade-offs. LLM-driven decision-making introduces unpredictability, latency, and cost concerns, and it requires guardrails to ensure reliability, security, and compliance.
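
One common-sense way to add such guardrails (the specifics here are assumptions, not a standard) is to validate every LLM-proposed action against an allowlist, a step budget, and a wall-clock timeout before executing it:

import time

class GuardrailViolation(Exception):
    """Raised when an LLM-proposed action breaks a policy."""

def guarded_execute(action: dict, tools: dict, allowed_tools: set,
                    steps_taken: int, max_steps: int,
                    started_at: float, timeout_s: float = 30.0):
    # Bound the number of actions and the total runtime of the agent run.
    if steps_taken >= max_steps:
        raise GuardrailViolation("step budget exhausted")
    if time.monotonic() - started_at > timeout_s:
        raise GuardrailViolation("agent run timed out")
    # Only allow tools that were explicitly approved for this agent.
    if action["name"] not in allowed_tools:
        raise GuardrailViolation(f"tool '{action['name']}' is not allowlisted")
    return tools[action["name"]](**action.get("args", {}))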

Why it matters to see the bigger picture

Recognising that agentic AI builds on well-established software architecture is more than a semantic detail. It matters because:

  1. It helps teams stay grounded. AI is powerful, but it does not replace solid architecture. Good state handling, error recovery, observability, and modular design remain essential.
  2. It prevents vendor lock-in. If you understand that the agent is just orchestration plus AI-based planning, you can design the agent framework in a modular way and swap out the LLM or other components later (see the sketch after this list).
  3. It avoids reinventing the wheel. We already have excellent patterns for workflows, event buses, and service orchestration. AI should complement these, not replace them.
  4. It highlights where innovation is needed. The real challenge is not wiring APIs together, but ensuring safety, reasoning reliability, and trust in AI-driven decisions.
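
On the second point, one way to keep the planner swappable (a sketch under assumed names, not a prescribed interface) is to make the orchestration code depend on a small Protocol rather than on a specific provider SDK, so an LLM-backed planner and a deterministic fallback are interchangeable:

from typing import Protocol

class Planner(Protocol):
    """Anything that can propose the next action given a goal and state."""
    def next_action(self, goal: str, state: list, tools: list) -> dict: ...

class RuleBasedPlanner:
    # Deterministic fallback: search once, then finish.
    def next_action(self, goal: str, state: list, tools: list) -> dict:
        if not state and "search_docs" in tools:
            return {"name": "search_docs", "args": {"query": goal}}
        return {"name": "finish"}

def run(goal: str, planner: Planner, tools: dict, max_steps: int = 10) -> list:
    # The loop never imports an LLM SDK; any Planner implementation will do.
    state = []
    for _ in range(max_steps):
        action = planner.next_action(goal, state, list(tools))
        if action["name"] == "finish":
            break
        result = tools[action["name"]](**action.get("args", {}))
        state.append((action["name"], result))
    return state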

The bottom line

Agentic AI is not magic. It is a combination of well-known patterns (orchestration, state management, event-driven systems, and microservices), enhanced by a new decision-making layer powered by LLMs.

If we understand this, we can focus on designing systems that are robust, flexible, and maintainable while leveraging AI where it actually adds value. Instead of chasing hype, we can use decades of proven software architecture to build better, smarter solutions.

Want more insights like this? Subscribe to Autom8 to receive more digital transformation insights.

Koen van Eijk

Principal AI Engineer & Technical Founder | Agentic AI, Enterprise Automation & LLMs | Ex-Automagica (Acquired by Netcall)

This is 100% spot on. In all the AI agents I've built, for other companies and my own, these patterns have paid off from the start. Doubly so in multi-agent setups.
