FinBankOps: Secure, Multi-Region Kubernetes Infrastructure for Fintech on AWS

Scope and Objectives 

Design and implement a production-grade, secure, and observable Kubernetes infrastructure on AWS using EKS. The platform supports: 

  • Multi-region high availability (HA) 

  • Zero-downtime blue/green deployments 

  • PCI-DSS-aligned cloud security controls 

  • GitOps-based continuous deployment via ArgoCD 

  • Secret isolation via External Secrets Operator

  • Prometheus/Grafana observability and CloudWatch logging 

  • Compliance posture scanning with kube-bench and kubescape 

  • Internet-facing exposure through Load Balancers (ALB + Istio Ingress Gateway) 

  • Containerized microservices hosted in Amazon ECR 

Project Repo:

https://github.com/Tosin-STIL/eks-microservice-demo

Tools & Services

Amazon EKS, Terraform (with S3 remote state and DynamoDB locking), Docker, Amazon ECR, Istio, ArgoCD, Argo Rollouts, External Secrets Operator, kube-bench, kubescape, Prometheus, Grafana, and CloudWatch.

Phase 1: Clone and Prepare the Microservices Application 

To establish a clean, AWS-native microservices base for this infrastructure project, we leverage the official AWS EKS Microservice Demo: a simulated e-commerce application that includes services like catalog, frontend, and detail, intended to demonstrate container orchestration on EKS using modern DevOps patterns. While the original repository includes CI/CD pipelines, deployment automation (e.g., Spinnaker), and ancillary infrastructure, our goal is to sanitize the repository to focus purely on the service layer for GitOps-based Kubernetes deployment. 

Accordingly, we will remove all non-essential directories — including CI/CD tooling (spinnaker/), progressive delivery examples (flagger/), infrastructure provisioning templates, and redundant documentation — yielding a minimal yet production-ready microservices codebase. 

Step 1.1: Clone the AWS EKS Microservice Demo App 
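
We clone the upstream AWS demo into a local working directory. A minimal sketch — note that <upstream-repo-url> is a placeholder for the official AWS EKS Microservice Demo repository URL:

    git clone <upstream-repo-url> eks-microservice-demo
    cd eks-microservice-demo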

Step 1.2: Remove Unnecessary Directories and Files 
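
A sketch of the cleanup, using the directory names called out above (spinnaker/, flagger/); the exact names of the infrastructure and documentation directories may differ in the upstream repo:

    rm -rf spinnaker/ flagger/
    rm -rf infra-templates/ docs/   # illustrative names for the provisioning templates and redundant docs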

This leaves us with only the apps/ directory, containing the frontend and detail services, including version subfolders. 

Step 1.3: Clean and Update .gitignore 

Ensure the .gitignore file excludes platform artifacts, Node modules, and transient directories: 
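
An example .gitignore covering those categories (entries are illustrative; adjust to your stack):

    node_modules/
    dist/
    .DS_Store
    .terraform/
    *.tfstate
    *.tfstate.backup
    .env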

Step 1.4: Initialize New Git Repository and Push to GitHub 
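
A typical sequence, using the project repo URL listed above:

    git init
    git add .
    git commit -m "Initial commit: sanitized microservices base"
    git branch -M main
    git remote add origin https://github.com/Tosin-STIL/eks-microservice-demo.git
    git push -u origin main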

Now we have a streamlined microservices repository ready to serve as the workload layer for the upcoming infrastructure provisioning and GitOps-driven deployment using ArgoCD and EKS. 

Phase 2: Provision Multi-Region EKS Clusters 

Pre-Phase 2: Terraform Backend Setup 

Step 0.1: Create the S3 Bucket for Remote State 
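
A sketch using the AWS CLI; the bucket name finbankops-terraform-state is an assumption (S3 bucket names must be globally unique):

    aws s3api create-bucket \
      --bucket finbankops-terraform-state \
      --region us-east-1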

Enable versioning on the bucket for audit and rollback purposes:
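
Using the same (assumed) bucket name:

    aws s3api put-bucket-versioning \
      --bucket finbankops-terraform-state \
      --versioning-configuration Status=Enabled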

Step 0.2: Create the DynamoDB Table for State Locking: 
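
The S3 backend expects a table whose partition key is a string attribute named LockID. A sketch (the table name terraform-locks is an assumption):

    aws dynamodb create-table \
      --table-name terraform-locks \
      --attribute-definitions AttributeName=LockID,AttributeType=S \
      --key-schema AttributeName=LockID,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST \
      --region us-east-1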

Step 0.3: Add Backend Configuration to Terraform 

Create the backend configuration in a backend.tf file at the root of the infrastructure folder (infra/):
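
A minimal sketch, assuming the bucket and table names created above:

    terraform {
      backend "s3" {
        bucket         = "finbankops-terraform-state"
        key            = "eks/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"
        encrypt        = true
      }
    }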

Initialize the backend: 
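
From the infra/ directory:

    terraform init

If local state already exists, Terraform will offer to migrate it into the S3 backend.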

Once the backend is initialized, we can begin scaffolding the modules (VPC, EKS, etc.) inside infra/modules/.

Step 2.1: Terraform Deployment 

In this phase, we will do the following: 

  • Define modules: vpc, eks, iam, alb, node_groups, asg 

  • Create two clusters: us-east-1 and us-west-2 

  • Enable Control Plane Logging, Cluster Autoscaler 

  • Separate node groups: stateless-node-group, dbproxy-node-group, and batch-node-group 

infra/providers.tf:
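
The file contents were not captured here; a plausible sketch with one aliased provider per region (the alias names are assumptions):

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }

    provider "aws" {
      alias  = "use1"
      region = "us-east-1"
    }

    provider "aws" {
      alias  = "usw2"
      region = "us-west-2"
    }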

infra/variables.tf: 
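
A minimal sketch (variable names are assumptions; the authoritative definitions are in the repo):

    variable "cluster_name_prefix" {
      description = "Prefix for EKS cluster names"
      type        = string
      default     = "finbankops"
    }

    variable "regions" {
      description = "Regions to deploy clusters into"
      type        = list(string)
      default     = ["us-east-1", "us-west-2"]
    }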

Create Module Folders: 
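
One folder per module listed above:

    mkdir -p infra/modules/{vpc,eks,iam,alb,node_groups,asg}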

Find the module code here:

https://github.com/Tosin-STIL/eks-microservice-demo/tree/main/infra/modules

infra/main.tf: 
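
A plausible shape for the root module, wiring the aliased providers to per-region module instances (the inputs shown are assumptions; see the repo for the real code):

    module "vpc_use1" {
      source    = "./modules/vpc"
      providers = { aws = aws.use1 }
      name      = "${var.cluster_name_prefix}-use1"
    }

    module "eks_use1" {
      source       = "./modules/eks"
      providers    = { aws = aws.use1 }
      cluster_name = "${var.cluster_name_prefix}-use1"
      vpc_id       = module.vpc_use1.vpc_id
      subnet_ids   = module.vpc_use1.private_subnet_ids
    }

    # Repeat the pair with aws.usw2 for the us-west-2 cluster.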

EXECUTION STEPS 

Step A: Initialize Terraform: 

Step B: Validate Configuration 

Step C: Plan Deployment 

Step D: Apply the Infrastructure: 
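
A typical sequence for Steps A through D, run from the infra/ directory:

    terraform init
    terraform validate
    terraform plan -out=tfplan
    terraform apply tfplan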

Phase 3: Containerization and Image Deployment 

This phase involves containerizing the application's backend and frontend components, pushing the Docker images to Amazon Elastic Container Registry (ECR), and preparing them for deployment into the EKS cluster. 

Step 3.1: Authenticate Docker to Amazon ECR 

Before pushing any image, authenticate your Docker client with ECR for each region: 
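
For example, for us-east-1 (<ACCOUNT_ID> is a placeholder for your AWS account ID):

    aws ecr get-login-password --region us-east-1 | \
      docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com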

Do this once for each region (us-east-1 and us-west-2) where images will be pushed. 

Step 3.2: Containerize the Backend Service (apps/detail/app.js) 

Create an Amazon ECR repository named detail in both us-east-1 and us-west-2 for the detail backend service. 
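
Using the AWS CLI:

    aws ecr create-repository --repository-name detail --region us-east-1
    aws ecr create-repository --repository-name detail --region us-west-2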

Build the image locally: 
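
A sketch, assuming the Dockerfile lives at the root of the service folder and a 1.0 tag (adjust the build context if the Dockerfile sits in a version subfolder):

    docker build -t detail:1.0 apps/detail/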

Tag the image: 

For us-east-1: 

For us-west-2: 
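
A sketch of the tag-and-push commands for both regions (<ACCOUNT_ID> and the 1.0 tag are placeholders):

    # us-east-1
    docker tag detail:1.0 <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/detail:1.0
    docker push <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/detail:1.0

    # us-west-2
    docker tag detail:1.0 <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/detail:1.0
    docker push <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/detail:1.0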

You can confirm the image has been pushed by running: 
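
For example:

    aws ecr describe-images --repository-name detail --region us-east-1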

Step 3.3: Containerize the Frontend Service (apps/frontend/server.js) 

First, create a repository named frontend in both us-east-1 and us-west-2 regions. 

Build the image locally: 

Tag and Push to Amazon ECR: 

For us-east-1: 

For us-west-2: 

After pushing, confirm with: 
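
The frontend follows the same pattern as detail; a consolidated sketch using the same placeholders:

    aws ecr create-repository --repository-name frontend --region us-east-1
    aws ecr create-repository --repository-name frontend --region us-west-2

    docker build -t frontend:1.0 apps/frontend/

    # us-east-1
    docker tag frontend:1.0 <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/frontend:1.0
    docker push <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/frontend:1.0

    # us-west-2
    docker tag frontend:1.0 <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/frontend:1.0
    docker push <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/frontend:1.0

    # confirm
    aws ecr describe-images --repository-name frontend --region us-east-1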

Phase 4: Istio Setup for Traffic Management 

Step 4.1: Install Istio 

1. Ensure you're targeting the correct cluster. Run for us-east-1 first: 
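
A minimal sketch (<cluster-name> is the name of the us-east-1 cluster created in Phase 2):

    aws eks update-kubeconfig --region us-east-1 --name <cluster-name>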

Then verify: 
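
A quick check; the context should point at the us-east-1 cluster:

    kubectl config current-context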

2. Download and install Istio CLI 
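
The standard download script from istio.io:

    curl -L https://istio.io/downloadIstio | sh -
    cd istio-*/
    export PATH=$PWD/bin:$PATH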

3. Install Istio with the default profile: 
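
A minimal install; the control plane lands in the istio-system namespace:

    istioctl install --set profile=default -y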

4. Enable automatic sidecar injection: 
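
Label the namespace that will host the workloads (default here) so Istio injects the Envoy sidecar automatically:

    kubectl label namespace default istio-injection=enabled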

5. Verify Istio is installed: 
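
istiod and the ingress gateway should be Running:

    kubectl get pods -n istio-system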

Step 4.2: Deploy the Services 

1. Create detail.yaml in the apps/ directory: 

2. Create frontend.yaml in the same directory:
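
A minimal sketch of detail.yaml (the image URI and container port 3000 are assumptions; frontend.yaml is identical apart from the name, image, and port):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: detail
      labels:
        app: detail
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: detail
      template:
        metadata:
          labels:
            app: detail
        spec:
          containers:
            - name: detail
              image: <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/detail:1.0
              ports:
                - containerPort: 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: detail
    spec:
      selector:
        app: detail
      ports:
        - port: 80
          targetPort: 3000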

Apply Both: 
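
    kubectl apply -f apps/detail.yaml
    kubectl apply -f apps/frontend.yaml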

Then confirm with: 
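
    kubectl get pods

Each pod should report 2/2 containers ready, confirming the Envoy sidecar was injected alongside the application container.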

Step 4.3: Ingress Setup 

We’ll expose both: 

  • frontend on / 

  • detail on /detail 

gateway.yaml:
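
A minimal sketch bound to the default Istio ingress gateway (the resource name is an assumption):

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: finbankops-gateway
    spec:
      selector:
        istio: ingressgateway
      servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
            - "*"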

virtualservice.yaml:
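
A sketch that matches /detail first and falls through to frontend on / (names assume the Gateway above and the Services from Step 4.2):

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: finbankops-routes
    spec:
      hosts:
        - "*"
      gateways:
        - finbankops-gateway
      http:
        - match:
            - uri:
                prefix: /detail
          route:
            - destination:
                host: detail
                port:
                  number: 80
        - route:
            - destination:
                host: frontend
                port:
                  number: 80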

Apply Both Resources: 
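
Assuming the files live under apps/ alongside the service manifests:

    kubectl apply -f apps/gateway.yaml
    kubectl apply -f apps/virtualservice.yaml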

Load the web address in a browser: 

http://a190eba858c0341daa2a3f7ec5b8e563-329820499.us-east-1.elb.amazonaws.com/ 

Phase 5: GitOps CD with ArgoCD 

Step 5.1: Install ArgoCD 
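
The standard manifest-based install:

    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml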

Verify: 
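
    kubectl get pods -n argocd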

Port Forward ArgoCD UI 

To access the ArgoCD UI locally: 
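
    kubectl port-forward svc/argocd-server -n argocd 8080:443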

Then visit:  https://localhost:8080 

Initial login: 

  • Username: admin 

  • Password: the auto-generated initial admin password 

Retrieve password:
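
For ArgoCD v2 and later, the initial password is stored in the argocd-initial-admin-secret Secret:

    kubectl -n argocd get secret argocd-initial-admin-secret \
      -o jsonpath="{.data.password}" | base64 -d; echo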

Username: admin 

Password: XXXXXXXXXXXX

Step 5.2: Configure Application YAMLs 

We'll build the following folder structure per service (e.g., for frontend): 
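
An assumed layout (file names are illustrative; the repo is authoritative):

    apps/frontend/
    ├── base/
    │   ├── deployment.yaml
    │   ├── service.yaml
    │   └── kustomization.yaml
    └── overlays/
        └── dev/
            ├── kustomization.yaml
            └── patch-deployment.yaml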

Each overlay will reference base/ but apply environment-specific patches like image tags, replica counts, or labels. 

Find the code in the project repository (linked above). 

Next, define an ArgoCD Application for frontend-dev:
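
A sketch of the frontend-dev Application (the source path and target namespace are assumptions):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: frontend-dev
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/Tosin-STIL/eks-microservice-demo.git
        targetRevision: main
        path: apps/frontend/overlays/dev
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated:
          prune: true
          selfHeal: true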

Apply the Application Manifest: 
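
Assuming the manifest is saved as apps/argocd/frontend-dev-app.yaml, mirroring the detail manifest path below:

    kubectl apply -f apps/argocd/frontend-dev-app.yaml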

Also, let's create the ArgoCD Application manifest for detail-dev; it mirrors frontend-dev apart from the name and source path: 

apps/argocd/detail-dev-app.yaml

Apply: 
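
Using the path given above:

    kubectl apply -f apps/argocd/detail-dev-app.yaml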

Verify in ArgoCD UI 

Open https://localhost:8080, and you should now see: 

frontend-dev and detail-dev 

Step 5.3: Blue/Green Deployments 

This step introduces zero-downtime deployment orchestration using: 

  • Argo Rollouts for deployment control (progressive rollout, pause/resume, rollback) 

  • Istio VirtualService for intelligent traffic routing between versions (blue/green) 

Required: the Argo Rollouts controller must be installed in the cluster. 
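
The controller's standard install, per the Argo Rollouts docs:

    kubectl create namespace argo-rollouts
    kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml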

Directory Structure Plan: 
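
An assumed layout, based on the files referenced in the steps below:

    apps/detail/
    ├── base/
    │   ├── rollout.yaml
    │   ├── service.yaml
    │   └── kustomization.yaml
    └── overlays/
        └── dev/
            ├── kustomization.yaml
            └── patch-rollout.yaml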

1. Replace deployment.yaml with rollout.yaml in apps/detail/base/rollout.yaml 
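
A minimal blue/green Rollout sketch (image URI and port are the same assumptions as before; autoPromotionEnabled: false keeps the new version in preview until manually promoted):

    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    metadata:
      name: detail
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: detail
      template:
        metadata:
          labels:
            app: detail
        spec:
          containers:
            - name: detail
              image: <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/detail:1.0
              ports:
                - containerPort: 3000
      strategy:
        blueGreen:
          activeService: detail
          previewService: detail-preview
          autoPromotionEnabled: false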

2. Create Services: detail and detail-preview:

apps/detail/base/service.yaml
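
Both Services start with the same selector; the Rollouts controller rewrites each Service's selector at runtime to pin it to the active or preview ReplicaSet:

    apiVersion: v1
    kind: Service
    metadata:
      name: detail
    spec:
      selector:
        app: detail
      ports:
        - port: 80
          targetPort: 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: detail-preview
    spec:
      selector:
        app: detail
      ports:
        - port: 80
          targetPort: 3000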

3. Update kustomization.yaml 
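
The base kustomization simply lists the two manifests:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - rollout.yaml
      - service.yaml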

4. Patch Image in patch-rollout.yaml 
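
One plausible shape: a JSON6902 patch referenced from the dev overlay's kustomization with a target of kind: Rollout, name: detail (the repo's actual patch mechanism may differ):

    # apps/detail/overlays/dev/patch-rollout.yaml
    - op: replace
      path: /spec/template/spec/containers/0/image
      value: <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/detail:1.1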

5. Update VirtualService

In apps/virtualservice.yaml, modify routes to:
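
One plausible shape: keep /detail pointed at the active detail Service and add a /detail-preview match against detail-preview so the new version can be tested before promotion (the exact routes in the repo may differ):

    http:
      - match:
          - uri:
              prefix: /detail-preview
        route:
          - destination:
              host: detail-preview
              port:
                number: 80
      - match:
          - uri:
              prefix: /detail
        route:
          - destination:
              host: detail
              port:
                number: 80
      - route:
          - destination:
              host: frontend
              port:
                number: 80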

6. Install CLI Dashboard 
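
The documented plugin install for Linux:

    curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
    chmod +x kubectl-argo-rollouts-linux-amd64
    sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts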

Then: 
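
    kubectl argo rollouts dashboard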

Open the Argo Rollouts dashboard at http://localhost:3100/rollouts.

Back in the application details view in ArgoCD: 

Phase 6: External Secrets Management 

Step 6.1: Install ESO 
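
The standard Helm install (release and namespace names follow the ESO docs):

    helm repo add external-secrets https://charts.external-secrets.io
    helm repo update
    helm install external-secrets external-secrets/external-secrets \
      -n external-secrets --create-namespace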

Verify deployment:
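
    kubectl get pods -n external-secrets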

Phase 7: Security Hardening & Compliance 

Step 7.1: Run kube-bench 

Apply the Kube-Bench Job Manifest: 
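
kube-bench ships a ready-made Job manifest (the project also provides an EKS-specific variant, job-eks.yaml):

    kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml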

Retrieve Scan Logs: 
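
    kubectl logs job/kube-bench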

I got a couple of interesting log entries: 

There are many scored and unscored checks (pass, warn, fail); I stored the output as audit evidence. 

Step 7.2: Install & Run kubescape: 

Kubescape scans the cluster for NSA/CISA, MITRE ATT&CK, and other frameworks. 

Step 1: Download the correct binary 

Step 2: Make it executable 

Step 3: Move it to a location in your PATH 

Step 4: Verify installation 
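
A sketch covering Steps 1 through 4 for a Linux x86-64 host (kubescape-ubuntu-latest is the project's Linux release asset):

    curl -Lo kubescape https://github.com/kubescape/kubescape/releases/latest/download/kubescape-ubuntu-latest
    chmod +x kubescape
    sudo mv kubescape /usr/local/bin/
    kubescape version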

Run a Compliance Scan Using the NSA Framework (v3+ syntax): 
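
    kubescape scan framework nsa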

I got some interesting results: 

Resource Summary 

  Failed Resources: 44 
  All Resources: 253 
  Compliance Score: 72.72% 

Scan with full verbosity and write the report in another format (e.g., JSON or HTML): 
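
For example, to produce the HTML report committed to the repo:

    kubescape scan framework nsa --verbose --format html --output nsa-report.html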

You can find it here:  https://github.com/Tosin-STIL/eks-microservice-demo/blob/main/nsa-report.html

To See All Available Frameworks: 
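
    kubescape list frameworks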

To Run a Scan with MITRE: 
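
    kubescape scan framework mitre --format html --output mitre-report.html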

You can find the full report here:  https://github.com/Tosin-STIL/eks-microservice-demo/blob/main/mitre-report.html

Phase 8: Observability and Logging 

Step 8.1: Install Prometheus and Grafana 

Run the following to install the Prometheus Operator and bundled Grafana: 
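
The kube-prometheus-stack chart from the prometheus-community repo (the release name monitoring is an assumption):

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install monitoring prometheus-community/kube-prometheus-stack \
      -n monitoring --create-namespace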

This will create Prometheus, Alertmanager, Grafana, node exporters, and all associated CRDs under the monitoring namespace. 

Step 8.2: Access Grafana 

Get Grafana admin password: 
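
The Grafana Secret name derives from the Helm release name, so with the assumed release monitoring:

    kubectl get secret -n monitoring monitoring-grafana \
      -o jsonpath="{.data.admin-password}" | base64 -d; echo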

Then port-forward to access Grafana locally: 
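
Again assuming the monitoring release name; Grafana then serves at http://localhost:3000:

    kubectl port-forward svc/monitoring-grafana -n monitoring 3000:80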

Step 8.3: Dashboards to Import 

Once inside Grafana: 

  • Go to Dashboards → Import 

  • Use the following IDs from Grafana.com

Dashboard for Kube Deployment Status with import ID: 11074:

Dashboard for Node exporter with import ID: 1860: 

Dashboard for Pod resources with import ID: 6417: 

Other dashboards created: 

Kubernetes / API server: 

Kubernetes / Compute Resources / Workload:

Kubernetes / Networking / Cluster:

Kubernetes / Networking / Namespace (Pods): 

Conclusion: FinBankOps Kubernetes Infrastructure 

As we bring the FinBankOps project to a close, we’ve successfully designed and implemented a robust, production-grade Kubernetes infrastructure tailored for fintech workloads on AWS. Over eight meticulously structured phases, we engineered a secure, observable, and scalable platform that adheres to modern DevSecOps and GitOps principles. 

🔹 Phase 1: Application Preparation 

We began by cloning and sanitizing the AWS EKS Microservices Demo application, stripping away non-essential pipeline and infrastructure directories to yield a clean, platform-agnostic codebase. This became the foundation for GitOps-driven deployments. 

🔹 Phase 2: Multi-Region Infrastructure Provisioning 

Using Terraform with remote state in S3 and DynamoDB locking, we provisioned two EKS clusters (us-east-1 and us-west-2) from reusable modules (vpc, eks, iam, alb, node_groups, asg), with control plane logging, Cluster Autoscaler, and dedicated node groups for stateless, database-proxy, and batch workloads. 

🔹 Phase 3: Containerization and Image Deployment 

The frontend and detail services were containerized and their Docker images pushed to Amazon ECR repositories in both regions, ready for deployment into the clusters. 

🔹 Phase 4: Istio Service Mesh Integration 

We installed Istio as our service mesh and configured Gateway and VirtualService resources for intelligent routing. This allowed for advanced traffic management and in-mesh observability. 

🔹 Phase 5: GitOps CD and Blue/Green Deployments 

ArgoCD was deployed to track and reconcile the state of our services declaratively via Git, and Argo Rollouts was integrated with Istio routing to enable blue/green strategies with zero-downtime updates and rollback safety. 

🔹 Phase 6: Secrets Management 

External Secrets Operator (ESO) was installed to support future integration with AWS Secrets Manager, laying the groundwork for centralized, secure secret synchronization across services. 

🔹 Phase 7: Security and Compliance 

We installed kube-bench and kubescape to perform CIS benchmark and NSA framework compliance scans. These scans provided insight into security posture and reinforced adherence to best practices. 

🔹 Phase 8: Observability and Logging 

To ensure deep visibility, we deployed the kube-prometheus-stack, which bundled Prometheus, Grafana, Alertmanager, and exporters. Multiple Grafana dashboards were configured to monitor deployments, resource usage, and service health—establishing a clear window into cluster operations. 

This is a disciplined execution of Kubernetes architectural principles—from secure provisioning and declarative deployment to progressive delivery and real-time observability. The resulting infrastructure is resilient, auditable, and ready to support the evolving needs of a production-grade fintech application. 

🧠 If you're building something ambitious and need help with infrastructure, GitOps, DevSecOps, or secure cloud-native design—let's talk.  🤝 Open to collaborations, consulting, and freelance gigs—especially where AWS, Kubernetes, and CI/CD automation intersect. 

📩 DMs are open, or feel free to connect directly. 
