Building AI Agents with Docker MCP Toolkit: A Developer's Real-World Setup
Author: Rajesh P
Building AI agents in the real world often involves more than just making model calls — it requires integrating with external tools, handling complex workflows, and ensuring the solution can scale in production.
In this post, we’ll walk through a real-world developer setup for creating an agent using the Docker MCP Toolkit.
To make things concrete, I’ve built an agent that takes a Git repository as input and can answer questions about its contents — whether it’s explaining the purpose of a function, summarizing a module, or finding where a specific API call is made. This simple but practical use case serves as a foundation for exploring how agents can interact with real-world data sources and respond intelligently.
I built and ran it using the Docker MCP Toolkit, which made setup and integration fast, portable, and repeatable. This blog walks you through that developer setup and explains why Docker MCP is a game changer for building and running agents.
Use Case: GitHub Repo Question-Answering Agent
The goal: Build an AI agent that can connect to a GitHub repository, retrieve relevant code or metadata, and answer developer questions in plain language.
Example queries:
- What is the purpose of this function?
- Summarize what this module does.
- Where in the codebase is this API called?
This goes beyond a simple code demo: it reflects how developers actually work in real-world environments.
Role of Docker MCP Toolkit
Without MCP Toolkit, you’d spend hours wiring up API SDKs, managing auth tokens, and troubleshooting environment differences.
With MCP Toolkit:
- Tool servers (such as the GitHub MCP server) ship as ready-to-run containers, so there is no SDK wiring.
- Credentials live in the Toolkit's secret store instead of hand-managed tokens (see the GitHub access step later in this post).
- Everything runs inside Docker, so the setup behaves the same on every machine.
Enabling a tool server is a single command, as the sketch below shows.
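A minimal sketch, assuming the GitHub server is published in the Docker MCP catalog under the name github-official; verify the exact server name in your Toolkit version's catalog:

docker mcp server enable github-official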
Role of Docker Compose
Running everything via Docker Compose means you treat the entire agent environment as a single deployable unit:
- One docker compose up brings the MCP Gateway (and any future services) online.
- The configuration is version-controlled next to the code, eliminating environment drift between machines.
- Tearing the stack down and rebuilding it is a single command, which keeps iteration fast.
A sketch of what this can look like follows.
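Here is a minimal sketch of such a Compose file. The image name, flags, and port are assumptions based on the public docker/mcp-gateway project, not the exact file from this setup; compare it with the docker-compose.yml in the linked source:

services:
  mcp-gateway:
    image: docker/mcp-gateway
    # Expose the gateway over SSE so a local Python process can connect.
    command: ["--transport=sse", "--port=8811", "--servers=github-official"]
    ports:
      - "8811:8811"
    volumes:
      # The gateway starts MCP servers as sibling containers via the Docker socket.
      - /var/run/docker.sock:/var/run/docker.sock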
Architecture Overview
1. User Query: the developer asks a question about a repository in plain language.
2. Agent Processing: the agent interprets the question and decides which tools it needs.
3. MCPTools → MCP Gateway: tool calls are routed from the agent's MCPTools client to the MCP Gateway container.
4. GitHub Integration: the gateway's GitHub server fetches the relevant code and metadata from the repository.
5. LLM Reasoning: the model combines the retrieved context with the question to produce an answer.
6. Response to User: the answer comes back in plain language.
Code Reference & File Roles
The detailed source code for this setup is available at this link.
Rather than walk through it line-by-line, here’s what each file does in the real-world developer setup:
docker-compose.yml: declares the MCP Gateway service that exposes the GitHub tools, along with its ports and configuration, so the entire tool layer starts with one command.
app.py: the agent itself. It connects to the gateway through MCPTools, hands that toolset to the LLM, and runs the question-and-answer loop.
In short: the Compose file manages infrastructure and orchestration, while the Python script handles intelligence and conversation.
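To make that split concrete, here is a minimal sketch of what app.py can look like, written against the Agno framework (the same Agent API that appears in the reader comment at the end of this post). The gateway URL, port, and model id are assumptions; align them with your Compose file:

import asyncio
import os

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools
from dotenv import load_dotenv

load_dotenv()  # reads OPEN_AI_KEY from the .env file created during setup

async def main():
    # Connect to the MCP Gateway started by docker compose.
    # The SSE endpoint below is an assumption -- match your gateway's transport and port.
    async with MCPTools(transport="sse", url="http://localhost:8811/sse") as mcp_tools:
        agent = Agent(
            name="Github Repo Summarizer",
            role="summarize the github repository using available toolkit",
            tools=[mcp_tools],
            model=OpenAIChat(id="gpt-4o", api_key=os.getenv("OPEN_AI_KEY")),
            show_tool_calls=True,
        )
        query = input("Enter your query: ")
        await agent.aprint_response(query, stream=True)

if __name__ == "__main__":
    asyncio.run(main())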
Setup and Execution
1. Clone the repository

git clone <repository-url>
cd mcp-demo-agents
2. Configure environment
Create a .env file in the root directory and add your OpenAI API key:
OPEN_AI_KEY=<<Insert your OpenAI key>>
3. Configure GitHub Access
To allow the MCP Gateway to access GitHub repositories, set your GitHub personal access token:
docker mcp secret set github.personal_access_token=<YOUR_GITHUB_TOKEN>
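Optionally, confirm the secret was stored. Recent Toolkit versions include a listing subcommand that shows secret names without revealing values; the exact subcommand is an assumption and may vary by version:

docker mcp secret ls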
4. Start MCP Gateway
Bring up the GitHub MCP Gateway container using Docker Compose:
docker compose up -d
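Before moving on, verify that the gateway container is up and inspect its logs. The service name mcp-gateway is an assumption here; use the name defined in your docker-compose.yml:

docker compose ps
docker compose logs -f mcp-gateway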
5. Install Dependencies & Run Agent
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python app.py
6. Ask Queries
Enter your query: Summarize https://github.com/owner/repo
Real-World Agent Development with Docker, MCP, and Compose
This setup is built with production realities in mind:
- Isolation: each tool runs in its own container, so a misbehaving integration cannot pollute the host or other services.
- Secrets management: tokens live in the Toolkit's secret store, not in code or shell history.
- Repeatability: the same Compose file produces the same environment on a laptop, a teammate's machine, or a CI runner.
From here, it's easy to add:
- More MCP tool servers (issue trackers, documentation, chat) without touching the agent loop.
- A different model backend; one reader swapped in a local Ollama model with a small change (see the comments below).
- Additional agents or supporting services as new entries in the same Compose file.
Final Thoughts
By combining Docker for isolation, MCP for seamless tool integration, and Docker Compose for orchestration, we’ve built more than just a working AI agent — we’ve created a repeatable, production-ready development pattern. This approach removes environment drift, accelerates iteration, and makes it simple to add new capabilities without disrupting existing workflows. Whether you’re experimenting locally or deploying at scale, this setup ensures your agents are reliable, maintainable, and ready to handle real-world demands from day one.
Before vs. After: The Developer Experience
Before: hours wiring up API SDKs, hand-managing auth tokens, and troubleshooting differences between machines.
After: pre-built tool servers enabled in minutes, tokens stored once with docker mcp secret, and identical containers everywhere.
Reader Comments

This is a great example; I hadn't heard of Agno before. I've got an Ollama container running locally and was able to talk to that instead of OpenAI by changing the model in the agent to use Ollama:

agent = Agent(
    name="Github Repo Summarizer",
    role="summarize the github repository using available toolkit",
    tools=[MCP_TOOLKIT],
    model=Ollama(id="gpt-oss:20b", host="http://my-ollama-host:11434"),
    show_tool_calls=True,
    debug_mode=True,
    add_datetime_to_instructions=True
)

I obviously had to add Ollama to the imports and put it into requirements.txt, but it simply worked. Nice job with the tutorial!
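For anyone replicating that swap: in current Agno versions the Ollama model class imports as below. The import path and the extra package dependency are assumptions based on Agno's documentation, not the original post:

from agno.models.ollama import Ollama  # add the ollama package to requirements.txt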
This is awesome, but can it handle large repos?