Docker Swarm Deep Dive: Orchestration, CI/CD & Cloud

6: Introduction to Docker Swarm & Basic Cluster Setup

Topics Covered:

✔️ What Docker Swarm is and why it’s useful for container orchestration

✔️ How to set up a Swarm Cluster with manager and worker nodes

✔️ Deploying and managing services in Swarm

✔️ Scaling, rolling updates, and monitoring services


🔹 What is Docker Swarm?

Docker Swarm is a native orchestration tool that manages containerized applications across multiple nodes. It provides:

Clustering – Multiple Docker nodes work together as a unified system

Load Balancing – Traffic is distributed across services automatically

Service Scaling – Easily scale services up or down

Self-Healing – Containers restart if they fail


🔹 Setting Up a Docker Swarm Cluster

Step 1: Initialize Swarm on the Manager Node

docker swarm init --advertise-addr <MANAGER-IP>        

This sets up the manager node and outputs a join token for workers.
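If the token scrolls away, the manager can reprint the full worker join command at any time:

```shell
# Reprint the join command (including the current token) for new workers
docker swarm join-token worker
```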

Step 2: Add Worker Nodes to the Cluster

On each worker node, run:

docker swarm join --token <TOKEN> <MANAGER-IP>:2377        

Step 3: Check the Cluster Status

On the manager node:

docker node ls        
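The output lists every node in the cluster; the hostnames below are placeholders, and the manager is marked Leader under MANAGER STATUS:

```
ID            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123... *   manager1   Ready    Active         Leader
def456...     worker1    Ready    Active
```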

🔹 Deploying Services in Swarm

Once the cluster is ready, deploy a service (e.g., an Nginx web server):

docker service create --name web --replicas 3 -p 80:80 nginx        

✔️ This runs 3 replicas of Nginx across the Swarm cluster

✔️ The Swarm load balancer distributes requests among them

To list services:

docker service ls        

To check where the service's tasks are running (from a manager node):

docker service ps web

Note that docker ps lists only the containers on the node where it is run, not the whole cluster.

🔹 Scaling Services & Performing Rolling Updates

To scale a service:

docker service scale web=5        

✅ Now 5 replicas of Nginx will run instead of 3

To update a service (e.g., change the image version):

docker service update --image nginx:latest web        

Swarm performs a rolling update to avoid downtime
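The update behavior is tunable. A sketch (the nginx tag here is just an example version): update one replica at a time with a delay between tasks, and roll back manually if the new version misbehaves:

```shell
# Update one task at a time, waiting 10s between tasks
docker service update --update-parallelism 1 --update-delay 10s \
  --image nginx:1.25 web

# If the new image misbehaves, revert to the previous service spec
docker service update --rollback web
```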


7: Advanced Swarm – Persistent Storage, Networking & Security

Topics Covered:

✔️ Persistent Storage in Swarm for stateful applications

✔️ Advanced Networking – Custom overlay networks, internal DNS, and service discovery

✔️ Load Balancing & Traffic Routing

✔️ Security Best Practices – Encrypting communication, RBAC, and securing Swarm


🔹 Persistent Storage in Docker Swarm

Containers are ephemeral, meaning their data is lost when they are removed or rescheduled to another node. To persist data across containers, use Docker Volumes or NFS-based storage.

Using Docker Volumes for Persistent Data

1️⃣ Create a volume:

docker volume create mysql_data        

2️⃣ Deploy a MySQL service using that volume:

docker service create --name mysql-db \
  --replicas 1 \
  --mount type=volume,source=mysql_data,target=/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  mysql:5.7        

✅ Data is persisted even if the container restarts
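For the NFS-based option mentioned above, the built-in local volume driver can mount an NFS export directly in the service definition; the server address and export path below are placeholders:

```shell
# Volume backed by an NFS export, mounted on whichever node runs the task
docker service create --name mysql-db \
  --replicas 1 \
  --mount 'type=volume,source=nfs_data,target=/var/lib/mysql,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/exports/mysql,volume-opt=o=addr=<NFS-SERVER-IP>' \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  mysql:5.7
```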


🔹 Advanced Networking in Docker Swarm

By default, Swarm attaches published services to the ingress overlay network, which routes external traffic into the cluster; for service-to-service communication, create a custom overlay network.

Creating a Custom Overlay Network

docker network create -d overlay my-overlay        

Deploy services using this network:

docker service create --name redis --network my-overlay redis
docker service create --name backend --network my-overlay my-app        

✅ Now backend can talk to redis without exposing ports externally
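Service discovery works through Swarm's internal DNS: on a shared overlay network, the service name resolves to a virtual IP. A quick check from inside a running backend task (the container ID is a placeholder, and this assumes the image ships ping):

```shell
# Resolve and reach the redis service by name from a backend container
docker exec -it <backend-container-id> ping -c 1 redis
```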


🔹 Swarm Load Balancing & Service Discovery

Swarm's routing mesh automatically load balances requests across a service's replicas. If a user accesses http://<any-node>:8080, the request is routed to a healthy replica, regardless of which node it runs on.

Deploy a load-balanced service:

docker service create --name web --replicas 3 -p 8080:80 nginx        
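Because of the routing mesh, the published port answers on every node, not just the nodes currently running a replica:

```shell
# Any node's IP works; Swarm forwards the request to a healthy replica
curl http://<any-node>:8080
```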

🔹 Security Best Practices in Swarm

1️⃣ Enable Encrypted Networks

docker network create --driver overlay --opt encrypted secure-net        

✅ Application traffic on this network is now encrypted on the wire (Swarm's management traffic is encrypted by default)

2️⃣ Rotate Manager & Worker Tokens Regularly

docker swarm join-token --rotate manager
docker swarm join-token --rotate worker        

✅ Prevents unauthorized nodes from joining

3️⃣ Run Services in Read-Only Mode

docker service create --read-only --name secure-app my-app        

Mounts the container's root filesystem read-only, preventing compromised processes from modifying files
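Many images still need somewhere writable (e.g. /tmp), so a common pattern with read-only services is to add a tmpfs mount for those paths:

```shell
# Read-only root filesystem, with an in-memory /tmp for scratch files
docker service create --read-only \
  --mount type=tmpfs,target=/tmp \
  --name secure-app my-app
```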


8: Disaster Recovery, CI/CD & Cloud Deployment

Topics Covered:

✔️ Disaster Recovery & Backup Strategies

✔️ Automating Deployments with CI/CD Pipelines

✔️ Deploying Docker Swarm on AWS, Azure, & Google Cloud


🔹 Disaster Recovery in Docker Swarm

Backup Swarm Configuration

For a consistent snapshot, stop the Docker daemon on a manager node before archiving the Swarm state directory, then restart it:

sudo systemctl stop docker
tar -czvf swarm_backup.tar.gz /var/lib/docker/swarm
sudo systemctl start docker        

✅ This stores the Swarm cluster state for quick recovery

Backup Persistent Storage

docker run --rm -v mysql_data:/data -v $(pwd):/backup busybox tar czf /backup/mysql_backup.tar.gz /data        

✅ Prevents data loss in case of node failure
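Restoring reverses the process: the archive stores paths relative to /, so extracting at / puts the files back under /data, i.e. into the (possibly recreated) volume:

```shell
# Extract the backup back into the mysql_data volume
docker run --rm -v mysql_data:/data -v $(pwd):/backup busybox \
  tar xzf /backup/mysql_backup.tar.gz -C /
```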


🔹 Automating Deployments with CI/CD

Using GitHub Actions

A GitHub Actions workflow to automatically build & deploy a service to Swarm (registry credentials are assumed to be stored as repository secrets):

name: Deploy to Swarm
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4

    - name: Build & Push Image
      run: |
        echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
        docker build -t myrepo/myapp:latest .
        docker push myrepo/myapp:latest

    - name: Deploy to Swarm
      run: |
        ssh user@manager-node "docker service update --image myrepo/myapp:latest my-app-service"        

✅ Ensures zero-downtime deployments


🔹 Deploying Swarm in the Cloud

Setting Up Swarm on AWS

1️⃣ Create EC2 instances

2️⃣ Install Docker

3️⃣ Initialize Swarm & Deploy Services

docker swarm init --advertise-addr <MANAGER-IP>
docker service create --name web --replicas 3 -p 80:80 nginx        
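Swarm nodes must be able to reach each other on a few well-known ports: 2377/tcp (cluster management), 7946/tcp and 7946/udp (node gossip), and 4789/udp (overlay/VXLAN traffic). On AWS that means opening them in the instances' security group; a sketch using the AWS CLI (the group ID is a placeholder):

```shell
# Allow Swarm traffic between instances in the same security group
aws ec2 authorize-security-group-ingress --group-id <SG-ID> --protocol tcp --port 2377 --source-group <SG-ID>
aws ec2 authorize-security-group-ingress --group-id <SG-ID> --protocol tcp --port 7946 --source-group <SG-ID>
aws ec2 authorize-security-group-ingress --group-id <SG-ID> --protocol udp --port 7946 --source-group <SG-ID>
aws ec2 authorize-security-group-ingress --group-id <SG-ID> --protocol udp --port 4789 --source-group <SG-ID>
```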

✅ Swarm now runs in the cloud; replicas can be scaled up or down with docker service scale


📌 Summary of Key Takeaways

✔️ Swarm clusters enable high availability and easy scaling

✔️ Persistent storage ensures data isn’t lost after restarts

✔️ CI/CD pipelines automate deployments for faster delivery

✔️ Cloud deployment allows Swarm to run at scale on AWS, Azure, & GCP

