Building AI Agents with Docker MCP Toolkit: A Developer's Real-World Setup

Author: Rajesh P

Building AI agents in the real world often involves more than just making model calls — it requires integrating with external tools, handling complex workflows, and ensuring the solution can scale in production.

In this post, we’ll walk through a real-world developer setup for creating an agent using the Docker MCP Toolkit.

To make things concrete, I’ve built an agent that takes a Git repository as input and can answer questions about its contents — whether it’s explaining the purpose of a function, summarizing a module, or finding where a specific API call is made. This simple but practical use case serves as a foundation for exploring how agents can interact with real-world data sources and respond intelligently.

I built and ran it using the Docker MCP Toolkit, which made setup and integration fast, portable, and repeatable. This blog walks you through that developer setup and explains why Docker MCP is a game changer for building and running agents.

Use Case: GitHub Repo Question-Answering Agent

The goal: Build an AI agent that can connect to a GitHub repository, retrieve relevant code or metadata, and answer developer questions in plain language.

Example queries:

  • “Summarize this repo: https://github.com/owner/repo”
  • “Where is the authentication logic implemented?”
  • “List main modules and their purpose.”
  • “Explain the function parse_config and show where it’s used.”

This goes beyond a simple code demo; it reflects how developers work in real-world environments:

  • The agent acts like a code-aware teammate you can query anytime.
  • The MCP Gateway handles tooling integration (GitHub API) without bloating the agent code.
  • Docker Compose ties the environment together so it runs the same in dev, staging, or production.

Role of Docker MCP Toolkit

Without MCP Toolkit, you’d spend hours wiring up API SDKs, managing auth tokens, and troubleshooting environment differences.

With MCP Toolkit:

  1. Containerized connectors – Run the GitHub MCP Gateway as a ready-made service (docker/mcp-gateway:latest), no SDK setup required (a run-command sketch follows this list).
  2. Consistent environments – The container image has fixed dependencies, so the setup works identically for every team member.
  3. Rapid integration – The agent connects to the gateway over HTTP; adding a new tool is as simple as adding a new container.
  4. Iterate faster – Restart or swap services in seconds using docker compose.
  5. Focus on logic, not plumbing – The gateway handles the GitHub-specific heavy lifting while you focus on prompt design, reasoning, and multi-agent orchestration.
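To make the first point concrete, here is a hedged sketch of running the gateway directly with docker run. The image name and the --servers/--port flags come from the compose setup described later in this post; exact flags may vary by gateway version:

docker run -d --name mcp-gateway-github \
  -p 8080:8080 \
  docker/mcp-gateway:latest \
  --servers=github --port=8080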

Role of Docker Compose 

Running everything via Docker Compose means you treat the entire agent environment as a single deployable unit:

  • One-command startup – docker compose up brings up the MCP Gateway (and your agent, if containerized) together.
  • Service orchestration – Compose ensures dependencies start in the right order.
  • Internal networking – Services talk to each other by name (http://mcp-gateway-github:8080) without manual port wrangling.
  • Scaling – Run multiple agent instances for concurrent requests (see the command sketch after this list).
  • Unified logging – View all logs in one place for easier debugging.
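As a concrete example of the scaling point above, Compose can run several replicas of a service with one flag. This assumes your compose file defines the agent as a service named agent, which extends the sample setup (where the agent runs outside Compose):

docker compose up -d --scale agent=3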

Architecture Overview


1. User Interaction

  • The developer runs the agent from a CLI or terminal.
  • They type a question about a GitHub repository — e.g., “Where is the authentication logic implemented?”

2. Agent Processing

  • The Agent (LLM + MCPTools) receives the question.
  • The agent determines that it needs repository data and issues a tool call via MCPTools.

3. MCPTools → MCP Gateway

  • MCPTools sends the request using streamable-http to the MCP Gateway running in Docker.
  • This gateway is defined in docker-compose.yml and configured for the GitHub server (--servers=github --port=8080).

4. GitHub Integration

  • The MCP Gateway handles all GitHub API interactions, such as listing files, retrieving file contents, and fetching repository metadata.

5. LLM Reasoning

  • The agent sends the retrieved GitHub context to OpenAI GPT-4o as part of a prompt.
  • The LLM reasons over the data and generates a clear, context-rich answer.

6. Response to User

  • The agent prints the final answer back to the CLI, often with file names and line references.

Code Reference & File Roles

The detailed source code for this setup is available in the mcp-demo-agents repository: https://github.com/rajeshsgr/mcp-demo-agents.

Rather than walk through it line-by-line, here’s what each file does in the real-world developer setup:

docker-compose.yml

  • Defines the MCP Gateway service for GitHub.
  • Runs the docker/mcp-gateway:latest container with GitHub as the configured server.
  • Exposes the gateway on port 8080.
  • Can be extended to run the agent and additional connectors as separate services in the same network (a minimal sketch follows).
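A minimal compose file matching that description might look like this. The image, flags, and published port are taken from this article, and the service name mirrors the internal hostname mentioned earlier; details such as secret wiring may differ from the actual repository:

services:
  mcp-gateway-github:
    image: docker/mcp-gateway:latest
    command: ["--servers=github", "--port=8080"]
    ports:
      - "8080:8080"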

app.py

  • Implements the GitHub Repo Summarizer Agent.
  • Uses MCPTools to connect to the MCP Gateway over streamable-http.
  • Sends queries to GitHub via the gateway, retrieves results, and passes them to GPT-4o for reasoning.
  • Handles the interactive CLI loop so you can type questions and get real-time responses (a condensed sketch follows).
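Condensed, the agent logic looks roughly like the sketch below. It assumes the Agno framework (the Agent API shown in the reader comment at the end of this post), that the gateway exposes its streamable-http endpoint at http://localhost:8080/mcp, and that your OpenAI key is available to the model client; names and details may differ from the actual app.py:

import asyncio

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools

async def main():
    # Connect to the MCP Gateway started by docker compose (endpoint path is an assumption)
    async with MCPTools(transport="streamable-http", url="http://localhost:8080/mcp") as mcp_tools:
        agent = Agent(
            name="Github Repo Summarizer",
            role="summarize the github repository using available toolkit",
            tools=[mcp_tools],
            model=OpenAIChat(id="gpt-4o"),
            show_tool_calls=True,
        )
        # Interactive CLI loop: type a question, get an answer
        while True:
            query = input("Enter your query: ")
            if not query.strip():
                break
            await agent.aprint_response(query, stream=True)

if __name__ == "__main__":
    asyncio.run(main())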

In short: the Compose file manages infrastructure and orchestration, while the Python script handles intelligence and conversation.

Setup and Execution

1. Clone the repository

git clone https://github.com/rajeshsgr/mcp-demo-agents.git
cd mcp-demo-agents

2. Configure environment

Create a .env file in the root directory and add your OpenAI API key:

OPEN_AI_KEY=<YOUR_OPENAI_API_KEY>

3. Configure GitHub Access

To allow the MCP Gateway to access GitHub repositories, set your GitHub personal access token:

docker mcp secret set github.personal_access_token=<YOUR_GITHUB_TOKEN>        

4. Start MCP Gateway

Bring up the GitHub MCP Gateway container using Docker Compose:

docker compose up -d         
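Before running the agent, you can confirm the gateway came up cleanly. The service name here assumes the compose file calls it mcp-gateway-github, matching the internal URL shown earlier:

docker compose ps
docker compose logs mcp-gateway-github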

5. Install Dependencies & Run Agent

python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python app.py        

6. Ask Queries

Enter your query: Summarize https://github.com/owner/repo

Real-World Agent Development with Docker, MCP, and Compose

This setup is built with production realities in mind:

  • Docker ensures each integration (GitHub, databases, APIs) runs in its own isolated container with all dependencies preconfigured.
  • MCP acts as the bridge between your agent and real-world tools, abstracting away API complexity so your agent code stays clean and focused on reasoning.
  • Docker Compose orchestrates all these moving parts, managing startup order, networking, scaling, and environment parity between development, staging, and production.

From here, it’s easy to add:

  • More MCP connectors (Jira, Slack, internal APIs), as sketched below.
  • Multiple agents specializing in different tasks.
  • CI/CD pipelines that spin up this environment for automated testing.
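For instance, assuming the gateway's --servers flag accepts a comma-separated list (an assumption; check your gateway version's flags), enabling a second connector could be a one-line change in docker-compose.yml:

    command: ["--servers=github,slack", "--port=8080"]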

Final Thoughts

By combining Docker for isolation, MCP for seamless tool integration, and Docker Compose for orchestration, we’ve built more than just a working AI agent — we’ve created a repeatable, production-ready development pattern. This approach removes environment drift, accelerates iteration, and makes it simple to add new capabilities without disrupting existing workflows. Whether you’re experimenting locally or deploying at scale, this setup ensures your agents are reliable, maintainable, and ready to handle real-world demands from day one.

Before vs. After: The Developer Experience

  • Before: hours spent wiring up API SDKs, managing auth tokens, and troubleshooting environment differences by hand.
  • After: a ready-made containerized gateway (docker/mcp-gateway:latest) brought up with a single docker compose up.
  • Before: integration logic bloating the agent code.
  • After: the gateway handles the GitHub-specific heavy lifting while the agent focuses on prompts and reasoning.
  • Before: "works on my machine" drift between dev, staging, and production.
  • After: identical container images and Compose-managed networking for every team member.


This is a great example; I hadn't heard of Agno before. I've got an Ollama container running locally and was able to figure out how to talk to that rather than OpenAI by changing the model in the agent to use Ollama:

agent = Agent(
    name="Github Repo Summarizer",
    role="summarize the github repository using available toolkit",
    tools=[MCP_TOOLKIT],
    model=Ollama(id="gpt-oss:20b", host="http://my-ollama-host:11434"),
    show_tool_calls=True,
    debug_mode=True,
    add_datetime_to_instructions=True,
)

I obviously had to add Ollama to the imports and put it into the req.txt, but it simply worked. Nice job with the tutorial!



This is awesome, but can it handle large repos?
