The Four Levels of AI Implementation: A Practitioner's Guide

When I first dove deep into AI agents, I spent weeks deliberating over which agent framework to begin building with.

"Should I use LangChain? Maybe its graph-based evolution LangGraph? But what about CrewAI, a role-based multi-agent orchestration framework?

"Wait, PydanticAI seems even better though, and comes from a highly regarded Pythonic team.

"Then again, PocketFlow seems very interesting since it's so lightweight and simple for LLMs to implement..."

[Screenshot: a projects folder full of the remnants of so many poor, incomplete, hallucinating agents.]

I probably burned three weekends straight firing off deep research query after query, studying examples, reading discussions, re-building the same proof of concept in one framework after the next.

At the same time, when I actually wanted to get things done, I found myself continuously going back to something embarrassingly simple: copying and pasting different contexts into and out of Gemini 2.0 Flash Thinking.

That's when I realized I was applying the wrong interaction model to my use cases. What matters is understanding how AI should interact with your users and their data.

The framework barely matters. You get affordances for free. That's it.

After a year of building AI features, I group AI use cases into four primary patterns.

Most teams fail because they force the wrong pattern onto their problem.

They build chat interfaces for tasks that need simple data transformation. They create complex autonomous agents when users actually want to direct the analysis themselves. They implement sophisticated orchestration frameworks when the only way to reliably get all the necessary context is to indiscriminately dump it into your context window.

Here's the framework I use to match AI patterns to real product needs.

A) The Four Patterns That Actually Matter

Think of AI implementation patterns as interaction models, not technical architectures.

Two patterns put humans in control:

  • Conversational AI: Multi-turn dialogue where users refine and explore

  • Analytical AI: Deep analysis directed by human judgment

Two patterns run without human intervention:

  • Deterministic AI: Predictable transformations that always produce the same output

  • Autonomous AI: Background intelligence that operates independently

The key insight comes down to understanding who drives the interaction and how predictable the task needs to be. Your choice has less to do with the technology and more to do with the fundamental nature of the problem you're solving.

Pattern 1: Conversational AI (Human-Directed Chat Iteration)

What most people think of when they think of AI.

I define Conversational AI as any system where users iterate through multi-turn interactions to reach their goal.

This goes beyond chatbots. It includes ReAct (Reason and Act) agents, tools like Claude Code, and any system where the AI maintains context across multiple exchanges and can reason and act in a loop before returning a response.

The defining characteristic is iteration: users refine their request based on AI responses, building toward a solution collaboratively with the agent.

When This Pattern Works

You need Conversational AI when the end goal isn't clear from the start. Think debugging code, refining creative work, or navigating complex decision trees.

Users know they have a problem but need to explore solutions iteratively.

The pattern shines when users need to course-correct based on intermediate results. This is a very clear way to insert a human-in-the-loop for validation to ensure the agent is acting properly. Each interaction builds on the last, creating a collaborative problem-solving session.

The Open-Ended Problem

Here's what you might miss about the open-endedness of chat UIs: they can be both a weakness and a superpower.

The weakness is obvious. When you present users with a blank chat interface, many freeze. They don't know what to ask, how to phrase requests, or what the system is capable of. The cognitive load is immense. You're essentially asking users to become prompt engineers to get value from your product.

But here's where it gets interesting, especially in the operations and internal tooling space.

Chat UIs absolutely shine when they serve as an alternative to bloated or otherwise difficult-to-use interfaces.

I've seen this pattern repeatedly: companies build internal tools over years, accumulating features and complexity. The UI becomes a maze of nested menus, obscure settings, and workflows that only power users understand. New employees take weeks to learn where everything lives.

Even if that chat UI is just GitHub Copilot, a chat UI is a breath of fresh air compared to wrestling with Jira tickets and notifications.

Enter a well-designed conversational interface with proper tool access. Suddenly, users can say "show me all customer complaints from last week about shipping delays" instead of navigating through three different systems and running manual filters.

They can ask "what's the status of order #12345 and why hasn't it shipped?" instead of checking four different screens.

Sure, you could do it yourself, but do you actually want to? Chat UIs don't need to offer groundbreaking functionality when they enable users to execute tasks they would otherwise skip.

The Circumvention Pattern

In ops products, conversational AI becomes a circumvention layer. Users bypass cluttered UIs entirely, going straight to what they need through natural language.

This works because the AI agent can:

  • Explore the entire workspace systematically

  • Connect data across disparate systems

  • Surface information users didn't know existed

  • Recall and surface tangential details and context humans forget or miss

  • Reason over an absurd amount of context in record time

Think about the last time you tried to find something in a complex enterprise system. You probably remembered some detail ("it was a customer in Texas... maybe their name started with M?") but couldn't navigate to it through the traditional UI. A conversational agent with proper tool access can take those fragments and find exactly what you need.

The biggest draw of chat UIs IMO is incredible functionality per inch of screen real estate.

One chat input can replace dozens of buttons, filters, and navigation elements, but only if you give the agent the right tools and context.
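
To make that concrete, here's a minimal sketch of a tool-equipped conversational agent using PydanticAI (one of the frameworks mentioned earlier). The model string, tool bodies, and canned return values are hypothetical stand-ins for your own systems, not a prescription:

```python
from pydantic_ai import Agent

# One chat input, backed by tools that reach into your real systems.
ops_agent = Agent(
    "openai:gpt-4o",  # placeholder; any capable model works
    system_prompt=(
        "You are an internal ops assistant. Use your tools to answer "
        "questions about orders and complaints. If a tool fails or "
        "returns nothing, say so plainly instead of guessing."
    ),
)

@ops_agent.tool_plain
def search_complaints(topic: str, days_back: int = 7) -> list[dict]:
    """Search recent customer complaints by topic."""
    # Stand-in for your real data layer.
    return [{"order": "#12345", "topic": topic, "note": "carrier delay"}]

@ops_agent.tool_plain
def order_status(order_id: str) -> dict:
    """Fetch an order's status and shipping history."""
    # Stand-in for your real order API.
    return {"order": order_id, "status": "held", "reason": "address mismatch"}

result = ops_agent.run_sync(
    "What's the status of order #12345 and why hasn't it shipped?"
)
print(result.output)  # `.data` in older PydanticAI versions
```

The value isn't in the loop itself; it's in how well those tools cover what users actually ask for.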

The Technical Reality

Honestly, I was obsessed with anything with a ReAct loop early on. The promise of agents that could reason, act, and observe felt like the future. And it absolutely is when applied to the right problems. I can't help but grin anytime Claude Code finds API documentation and builds a functional integration all by itself.

If you're chatting with an AI skeptic who wears an Oura ring, this is how you get their attention.

The challenge isn't building these systems. Every framework from LangChain to PydanticAI to Google's Agent Development Kit offers ReAct implementations.

The challenge is knowing when to use them and how to properly equip them.

Most conversational AI fails because teams focus on the wrong aspects. They obsess over personality and clever responses while ignoring tool integration and context management. Your AI doesn't need to be witty. It needs to maintain state, access the right tools, and have graceful failure states so trust is maintained and the user can adjust their prompting approach.

For internal tools especially, success comes from:

  • Appropriate tool coverage (can the agent actually do what users need?)

  • Smart context management (does it understand the workspace?)

  • Intelligent exploration and contextual semantic mapping (can it find things users only vaguely remember?)

  • Clear capability communication and expectation setting (do users know what's possible? Does it fail gracefully?)

Most products fail to hit these four criteria. I'm guessing cost minimization (using too weak of a model or limiting its context window) and fragmented product teams (core agent, data, and tools all owned by separate squads) lead to a lot of this.

At the same time, maybe it's fine to whiff on these if you need to build the muscle in your team. Models keep getting cheaper and more capable. If you build the infra and UX now, and the scaffolding around the model is correct, your product will start working as the models improve.

That being said, you risk losing user trust and engagement and that's hard to earn back.

[Screenshot: months ago, Gemini whiffed on this prompt 9/10 times; Claude + a Google MCP nailed it 10/10 times.]

Given the state of most enterprise agent implementations, it's all too trivial to get superior performance with a basic Claude agent (i.e., Claude Desktop) and access to the standard API. Just have Claude one-shot an integration or an MCP server.

The Critical Question

Before building conversational AI, ask: "Would users even want to chat here?"

Perhaps more importantly, ask: "Is the current interface so complex that chat becomes a relief?"

I've seen teams force chat interfaces into simple workflows where users would rather just have a simple button that works. Maybe their workflow is too hard to describe and requires users to react to and manipulate the AI.

In this case, chat just adds friction.

On the flip side, I've also seen chat interfaces transform unusable enterprise software into something new employees can use productively on day one.

The difference comes down to complexity and exploration.

If users know exactly what they want and the current UI provides it efficiently, don't add chat. If users struggle to find what they need or the UI has become a barrier to productivity, conversational AI might be your secret weapon.

B) Pattern 2: Analytical AI (Human-Directed Analysis)

I consider Analytical AI to be any system that processes large amounts of information to surface insights, with humans directing the analysis.

In my opinion, this pattern is criminally underutilized, especially in enterprise software.

With models like Gemini 2.5 Pro offering massive context windows, you can dump entire datasets and get sophisticated analysis worthy of a subject matter expert with unlimited time and energy. Models like OpenAI's o3 routinely deliver novel, hidden insights and valuable architectural recommendations.

I think intelligent application of Analytical AI alone would surface more opportunities than most operational teams have time to pursue. "Agentic UI" is hot, but building smart context pipelines that routinely deliver actionable insights will yield most teams more than enough value to deliver ROI.

The Breakthrough Moment

I discovered this pattern's power naturally, as reasoning models helped me get ahead of edge cases in long PRDs with complex technical requirements, downstream dependencies, and gotchas.

When we were faced with operational problems in the data my users struggled to maintain, I tried something simple: dumping all the data into Gemini with a well-crafted prompt.

[Screenshot: a simulated example of something I tried a few months back with a dataset at Wonder. Immediate root cause analysis and actionable insight.]

This pattern works because modern reasoning models are incredibly good at finding patterns, anomalies, and insights when given sufficient context. Most people have no idea how capable these reasoning models have become in the last 9 months.

Now, naysayers will say that this isn't "true reasoning" and that I'm being fooled by hallucinations. That this is still just next-token prediction, nothing more.

To that, I would say you're focusing on the wrong thing.

I don't care who you are: you're likely underestimating how capable these models are and how useful their outputs can be.

Look at the screenshot above. These claims are trivial to verify and offer immediate ROI to pursue. They collapse search time to zero. They are not replacing human judgment; they are simply pre-filtering and collapsing the search space for your human operator.

Stop fighting it. Nothing is 100% guaranteed. Take what's valuable and spot on; disregard what seems off.

Let the model do what it's built for: reasoning and inferring the correct output over arbitrary data.

Single-Turn vs Multi-Turn Analysis

Analytical AI can work in two modes:

Single-turn analysis

Perfect for defined questions. Dump your data, ask your question, get your answer. This is ideal for reports, summaries, and one-off investigations.
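
A minimal sketch of the single-turn flow, assuming the google-genai Python client; the model name, prompt, and `orders_export.csv` file are illustrative placeholders, not a prescription:

```python
from google import genai

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# Dump the whole dataset plus one well-crafted prompt. No pipeline, no RAG.
report = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[
        "You are a senior operations analyst. Find anomalies, likely root "
        "causes, and the three highest-ROI fixes. Cite specific rows.",
        open("orders_export.csv").read(),  # hypothetical data export
    ],
)
print(report.text)
```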

Multi-turn analysis

Lets users direct exploration. They see initial insights, ask follow-up questions, dive deeper into specific areas.

This is harder to productize (it requires users who understand how to direct the conversation), but it's transformative when used well.

This is more akin to how I use Gemini 2.5 Pro to explore and map out a problem and solution space for a PRD. It's going back and forth with Claude Code to plan a feature implementation, then executing once you see it's framed properly.

The challenge with giving users multi-turn analytical AI is that it puts the cognitive load back onto them.

They need to know what questions to ask next. They need to know when to reject the output and regenerate.

But for power users who understand their domain, this pattern unlocks capabilities that would otherwise require a data science team.

Why Teams Miss This

Most teams jump straight to building chatbots or autonomous agents. They miss that sometimes you just need to throw data at a reasoning model and let it work.

In 2025, with models like o3 and Gemini 2.5 Pro offering absurd reasoning and context windows at very reasonable prices, this pattern has become even more powerful.

The reasoning capabilities are so strong that simple prompts over large contexts often outperform genuine agents that can act in a ReAct loop. This is especially so when a team is new to building agentic workflows.

C) Pattern 3: "Deterministic" AI (System-Directed Transformation)

Most people who use AI don't realize you can use LLMs in place of deterministic functions.

"Deterministic" AI (I use quotes as no LLMs are truly deterministic) transforms messy inputs into structured outputs.

This is the workhorse pattern, unglamorous but essential. It's AI as a better parser, categorizer, and structured data extractor.

Given the same general input schema, it produces the same output schema.

Where This Shines

You need Deterministic AI when consistency matters more than creativity:

  • Parsing receipts into expense categories

  • Extracting structured data from documents

  • Routing support tickets

  • Tool calling

  • Workflow orchestration that feeds values into other functions

The key is recognizing when you need reliability over flexibility. Users expect these systems to work the same way every time.

Implementation Realities

Success with Deterministic AI comes from constraints, not capabilities.

Set temperature close to zero. Use structured output modes or an external library like Instructor; there's a decent amount of evidence that native structured output modes diminish LLM performance, which is a point in favor of libraries that validate and retry instead.

Instructor is a popular Python library to reliably get structured outputs from your LLMs.

Validate everything.
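
Here's a rough sketch of that flow with Instructor; the `ExpenseLine` schema and the receipt text are made up for illustration:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field

class ExpenseLine(BaseModel):
    merchant: str
    category: str = Field(description="One of: travel, meals, software, other")
    amount_usd: float

# Instructor patches the client so the response is parsed and validated
# against the Pydantic model, re-asking the model if validation fails.
client = instructor.from_openai(OpenAI())

receipt_text = "UBER *TRIP  03/14  SAN FRANCISCO  $23.40"  # hypothetical input

expense = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,  # constrain, don't create
    response_model=ExpenseLine,
    messages=[{"role": "user", "content": f"Extract this expense:\n{receipt_text}"}],
)
print(expense.model_dump())  # e.g. {'merchant': 'Uber', 'category': 'travel', ...}
```

The Pydantic model is the contract: same input schema in, same output schema out, every time.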

This isn't about pushing the model's creative boundaries, it's about reliable execution.

The most successful Deterministic AI features are invisible.

Users don't know AI is involved. Forms fill themselves correctly, documents parse perfectly, data flows structured and clean.

D) Pattern 4: Autonomous AI (System-Directed Automation)

Autonomous AI operates independently in the background, handling tasks without human intervention.

Let's be clear about what this actually means.

  • It's not science fiction AGI. It's cron jobs with reasoning capabilities, perhaps in a multi-turn ReAct loop if you're getting fancy.

  • It's monitors that understand what they're watching.

  • It's workflows that can handle edge cases intelligently.

Personally, I don't find these all that exciting, but where they excel is output per input: users benefit from them without investing any time, energy, or cognition.

Think of them like a Roomba for your users' workspace: keeping things tidy and well maintained, perhaps surfacing insights at a regular cadence.

The Pragmatic View

I think of Autonomous AI as ambient intelligence: it's just there, working in the background. A ReAct loop checking for anomalies every hour. An intelligent monitor that alerts only when something truly matters. A workflow that can route exceptions intelligently.
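
As a sketch of what a "cron job with reasoning capabilities" can look like: the alerting hook and log source below are hypothetical, and the JSON contract is just one way of bounding the output:

```python
import json
from openai import OpenAI

client = OpenAI()

def notify_oncall(reason: str) -> None:
    print(f"ALERT: {reason}")  # stand-in for your real pager/Slack hook

def check_logs_once(log_tail: str) -> None:
    """One tick of the monitor; schedule it hourly with cron or a task queue."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                'You monitor application logs. Respond with JSON: '
                '{"alert": bool, "reason": str}. Alert ONLY on genuine '
                "anomalies, never on routine noise."
            )},
            {"role": "user", "content": log_tail},
        ],
    )
    verdict = json.loads(resp.choices[0].message.content)
    if verdict["alert"]:
        notify_oncall(verdict["reason"])
```

Narrow domain, bounded action: the worst this can do is page someone unnecessarily.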

The key to successful Autonomous AI is tight scoping. These systems work when their domain is narrow and their actions are bounded. They fail when you try to make them general-purpose.

Why These Are Production-Ready

Autonomous AI is ready for production today.

But only if you scope it properly. A background agent monitoring for specific patterns in logs? Works great. A background agent updating and deleting data without checking in with the users, silently propagating errors without traceability? A disaster.

The teams succeeding with Autonomous AI aren't trying to build human replacements. They're building intelligent automation for specific, well-defined tasks.

Think of it as the evolution of rules engines, not the birth of artificial employees.

E) Engineered Context is a Prerequisite

Here's what I learned very quickly: you can't build any AI system if you can't reliably determine what context it needs.

"Context engineering" isn't just a technical detail. It's the foundation of successful AI implementation. Every pattern lives or dies based on how well you understand and are able to systematically structure the necessary context and feed it to your agent.

  • For Conversational AI, context is dynamic, built up through the interaction.

  • For Analytical AI, context is comprehensive, including everything relevant.

  • For Deterministic AI, context is precise, containing only what's needed for the transformation.

  • For Autonomous AI, context is bounded, keeping the system focused and reliable.

Master context before you master patterns.

F) Choosing Your Pattern: A Practical Framework

Stop asking "Which AI framework should I use?" Instead, start asking:

Who drives the interaction?

  • If humans need control and iteration → Conversational or Analytical

  • If the system should run independently → Deterministic or Autonomous

How broad is the context?

  • Narrow, well-defined scope → Deterministic or Autonomous

  • Broad, exploratory scope → Conversational or Analytical

How predictable is the task?

  • Same inputs, same outputs → Deterministic

  • Variable inputs, reasoning required → Conversational, Analytical, or Autonomous

Most problems benefit from starting with multi-turn Analytical AI, which is really just Conversational AI pointed at your data.

Dump your data, understand what you're really solving, then choose the right pattern for productization.

The Simple + SOTA Principle

Simple pattern plus state-of-the-art model beats complex pattern plus weaker model. Every time. Always use the latest model releases.

This seems obvious, yet I watch teams do the opposite daily. McKinsey released a report a few weeks ago mentioning ChatGPT 3.5.

Yes, ChatGPT 3.5. Perhaps it was a report that got stuck in the publishing queue.

In case you didn't know, GPT-3.5 Turbo is an excellent multi-agent orchestration model. If you're looking to get into street racing, the Toyota Camry is also one of the fastest vehicles on the market.

Don't create complex pipelines and complex RAG solutions when all your context occupies only 5% of Gemini 2.5 Flash's massive context window.
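
A quick sanity check I find useful, sketched with the google-genai client; the file path and the headroom threshold are assumptions (check your model's actual limits):

```python
from google import genai

client = genai.Client()
corpus = open("all_internal_docs.txt").read()  # hypothetical context dump

# Count tokens first. If everything fits comfortably in the window,
# skip the RAG pipeline entirely and just send it.
count = client.models.count_tokens(model="gemini-2.5-flash", contents=corpus)
if count.total_tokens < 800_000:  # leave headroom below the ~1M-token window
    answer = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[corpus, "Using the docs above, answer: <your question>"],
    )
    print(answer.text)
```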

The best AI implementations often look too simple. Claude Code recently shed embeddings-based RAG and simply lets Claude crawl around and figure things out.

That's because the complexity went into understanding the problem, not engineering the solution.

G) The Stakeholder Reality

I've found technical stakeholders are often skeptical of simple AI solutions. They are distrustful of their non-deterministic nature and the inability to prevent edge cases and failures.

Business stakeholders, on the other hand, can be skeptical too, but they become exuberantly excited when you find the right use case. They're consistently amazed when the right AI pattern solves their problem, regardless of the edge cases that may arise.

Your job as a builder is to bridge this gap.

Show technical stakeholders you don't need 100% reliability to deliver value. Show business stakeholders that AI can fundamentally transform some messy problem that seemed impossible to solve without it.

H) Start Here

If you're building AI features, here's your path:

  1. Identify your highest-friction workflow. Where do humans spend time on repetitive or complex tasks?

  2. Explore with Analytical AI first. Dump relevant data into a reasoning model. What insights emerge? What patterns do you see?

  3. Match the pattern to the problem. Use the framework above. Who drives the interaction? How broad is the context? How predictable is the task?

  4. Start simple. Use the most basic implementation of your chosen pattern. Complexity can come later if needed. Start with manual prompting. Work your way up.

  5. Iterate based on real usage. Users will show you what they actually need, which is rarely what they initially request.

The teams winning with AI aren't the ones with the most sophisticated architectures. They're the ones who deeply understand these patterns and match them correctly to real user needs.

Stop obsessing over frameworks. Start obsessing over interaction models.

The framework barely matters. The pattern is everything.

-Brandon
