Deployment architectures - Part 2 [AWS]

For a deeper understanding of the nuances between ALB, NLB, ADC, and API Gateway, you may refer to the first article in this series.

Deployment architectures - Part 1 : WAF, ALB | NLB, ADC | API Gateway

The first approach is a standard request-response flow through a secured and managed API infrastructure, typical of modern cloud-native environments. In this preferred design, WAF sits at the edge and the Load Balancer sits in front of the API Gateway, which gives better scalability, routing, and separation of concerns.

The second approach is technically valid, but it is less typical, less optimal, and often reflects a custom or legacy setup. It works if each component is configured properly, but it comes with architectural caveats. It is typically used in:

  • Legacy systems where API Gateway is fronting older infrastructure

  • Self-managed API Gateways doing more than just HTTP-based API logic

  • Specific edge cases where the API Gateway needs to inspect requests before load balancing

  • Systems where API Gateway is doing authentication before sending to generic load-balanced backend

Approach-1 vs Approach-2

Request Flow (from client to backend)

  1. Client → Amazon CloudFront: The client sends a request, which first hits CloudFront (a CDN).

  2. CloudFront → AWS WAF: CloudFront is integrated with AWS WAF, so the request is inspected by WAF at this layer for threats, rate limiting, SQL injection, etc.

  3. WAF → Amazon Route 53 (origin DNS resolution): for requests that pass WAF, CloudFront forwards them to the origin; Route 53 resolves the origin domain name to the appropriate regional Application Load Balancer (ALB) endpoint.

  4. Route 53 → ALB (per region): The request is routed to the ALB in the nearest region (e.g., us-east-1 or eu-west-1), based on latency or geolocation.

  5. ALB → ECS Services (via Target Groups): The ALB forwards the request to Amazon ECS services running on Spot Instances in various availability zones.
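Steps 3 and 4 above boil down to latency-based routing records in Route 53 that alias the API's domain to each regional ALB. Below is a minimal boto3 sketch of that piece; the hosted zone ID, ALB DNS names, and ALB canonical hosted zone IDs are placeholder assumptions, not values from a real deployment.

```python
# Hedged sketch: latency-based alias records pointing api.example.com at two
# regional ALBs. All zone IDs and ALB DNS names below are placeholders.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000EXAMPLE"  # your public hosted zone (placeholder)
REGIONAL_ALBS = {
    # region: (ALB DNS name, ALB canonical hosted zone ID) - both placeholders
    "us-east-1": ("alb-use1-123.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    "eu-west-1": ("alb-euw1-456.eu-west-1.elb.amazonaws.com", "Z32O12XQLNTSW2"),
}

changes = []
for region, (alb_dns, alb_zone_id) in REGIONAL_ALBS.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "SetIdentifier": f"alb-{region}",  # required for latency routing
            "Region": region,                  # Route 53 answers with the lowest-latency record
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,  # skip a region whose ALB is unhealthy
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Latency routing to regional ALBs", "Changes": changes},
)
```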

CloudFront integrates WAF at the edge, meaning WAF filters traffic before anything reaches the Application Load Balancer or backend infrastructure. In fact, it is a best practice in AWS architecture.

Why AWS WAF at CloudFront (Edge) is Common — and Recommended

  1. Edge-level Protection: CloudFront is a global content delivery network (CDN) that sits at AWS edge locations close to end users. Integrating AWS WAF with CloudFront means malicious traffic is blocked before it even hits your origin infrastructure, reducing load and cost.

  2. Cost Efficiency: Filtering requests at the edge saves bandwidth, prevents unnecessary scaling of backends, and reduces downstream processing. If you only place WAF at the ALB or API Gateway level, all traffic (including malicious) still reaches those services, costing you compute and bandwidth.

  3. Performance: CloudFront caches content, reducing latency for users. WAF rules at the edge minimize the response time for blocked or throttled requests (faster fail/deny response).

  4. Global Reach: CloudFront + WAF lets you enforce security rules consistently across all regions. With ALBs being regional, you'd otherwise need to replicate WAF rules per region (harder to manage and less scalable).
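To make the edge placement concrete, here is a hedged boto3 sketch that creates a CLOUDFRONT-scope web ACL (these must be created in us-east-1) with one AWS managed rule group and a simple rate-based rule; the resulting ARN is what you reference from the CloudFront distribution's WebACLId. The ACL name, priorities, metric names, and rate limit are illustrative assumptions.

```python
# Hedged sketch: a CLOUDFRONT-scope WAFv2 web ACL (created in us-east-1).
# Names, metric names, and the rate limit are illustrative placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope lives in us-east-1

response = wafv2.create_web_acl(
    Name="edge-web-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",  # common SQLi/XSS protections
                }
            },
            "OverrideAction": {"None": {}},  # managed rule groups use OverrideAction, not Action
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "awsCommonRules",
            },
        },
        {
            "Name": "rate-limit",
            "Priority": 1,
            "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
            "Action": {"Block": {}},  # throttle abusive IPs at the edge
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rateLimit",
            },
        },
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "edgeWebAcl",
    },
)

# This ARN goes into the CloudFront distribution's WebACLId setting
# (via the console, CloudFormation, or update_distribution).
print(response["Summary"]["ARN"])
```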

Layered Design - Best Practices

Two Common Deployment Patterns for AWS WAF

Placing AWS WAF at CloudFront (the edge) is not just common — it's the preferred, scalable, and cost-effective pattern in AWS architecture. It allows you to protect your infrastructure globally with consistent security policies, before traffic even hits your ALB or compute services.

The other approach, attaching WAF regionally at the ALB or API Gateway, is best for regional apps and internal APIs, and it costs less when a CDN is unnecessary.
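In that regional pattern the web ACL is created with REGIONAL scope in the same region as the resource and attached directly to it. A minimal boto3 sketch with placeholder ARNs (this association works for ALBs and regional REST API stages, but not for HTTP APIs):

```python
# Hedged sketch: attach an existing REGIONAL-scope web ACL to a regional resource.
# Both ARNs below are placeholders, not real resources.
import boto3

wafv2 = boto3.client("wafv2", region_name="eu-west-1")

WEB_ACL_ARN = "arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/regional-acl/abcd1234"
ALB_ARN = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/app-backend/abc123"

# Works for ALBs, REST API Gateway stages, and AppSync APIs.
wafv2.associate_web_acl(WebACLArn=WEB_ACL_ARN, ResourceArn=ALB_ARN)
```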

API Gateway typically comes after CloudFront (and WAF, if configured).

API Gateway does not connect to ALB or NLB directly. It is a front end in its own right for backend services such as Lambda, ECS, and EC2.

Two Common Patterns Involving API Gateway

Pattern A: CloudFront + WAF + API Gateway (Serverless / API-first)

This is the most common pattern for public APIs:

  1. Client

  2. CloudFront (CDN & TLS termination)

  3. WAF (attached to CloudFront)

  4. Amazon API Gateway

  5. Backend (e.g., AWS Lambda, ECS, or HTTP endpoint)

  • Cloud-native, serverless

  • Best for REST or GraphQL APIs

  • WAF can be applied at CloudFront or directly on a regional (REST) API Gateway stage

Note: If using regional API Gateway, you can front it with CloudFront to improve latency and add WAF at the edge.

Pattern B: CloudFront + WAF + ALB/NLB + ECS/EC2 (Web Apps)

Used for traditional web apps or containerized services:

  1. Client

  2. CloudFront

  3. WAF

  4. Application Load Balancer (ALB) or Network Load Balancer (NLB)

  5. ECS / EC2 / App servers

Generally, API Gateway does not need to sit in front of an ALB or NLB, but in some advanced scenarios it can:

Use API Gateway as a Reverse Proxy to ALB/NLB: API Gateway (HTTP API) can forward requests to:

  • ALB (via public HTTP endpoint)

  • NLB (not common unless using VPC Link)

Example:

  • API Gateway (HTTP API) → VPC Link → NLB → ECS/EC2

  • Used when: Backend is in private VPC and we want to expose it through API Gateway with security/auth features (IAM, JWT, etc.)

When to Use API Gateway, ALB, NLB

API Gateway is not designed to connect to ALB or NLB directly.

  • Instead, it’s used as an entry point to APIs and services (like Lambda, ECS, EC2).

  • You can front API Gateway with CloudFront + WAF for global edge protection.

  • You can route CloudFront to API Gateway or CloudFront to ALB, depending on whether your backend is serverless or traditional.

What is a Reverse Proxy?

A reverse proxy is a server that sits in front of backend servers, accepts incoming client requests, and forwards them to those backend servers, then returns the backend response to the client.

In simpler terms:

  • Client thinks it’s talking to one server (the proxy)

  • The proxy internally routes the request to the right backend server

  • Backend is hidden from the outside world
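To make the idea concrete outside of AWS, here is a toy reverse proxy in Python: clients talk only to the proxy on port 8080, which forwards each request to a hypothetical backend on 127.0.0.1:9000 and relays the response. It is purely illustrative, not production code.

```python
# Toy reverse proxy for illustration only: the client sees one server (this proxy),
# while the real backend (assumed to run on 127.0.0.1:9000) stays hidden behind it.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

BACKEND = "http://127.0.0.1:9000"  # hypothetical private backend

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the incoming path to the backend and relay its response.
        with urlopen(Request(BACKEND + self.path)) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type", upstream.headers.get("Content-Type", "text/plain"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # Clients connect to the proxy; they never see the backend address.
    HTTPServer(("127.0.0.1", 8080), ReverseProxyHandler).serve_forever()
```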

Architecture Flow

You have a private microservice running in a VPC behind an NLB. You want to expose it securely via API Gateway, so:

  • You create a VPC Link (private connection)

  • Set up API Gateway to forward requests to your NLB

  • The client hits API Gateway, but the backend service is protected and never exposed directly

Benefits of using API Gateway as reverse proxy:

  • Security: Backend isn't exposed publicly

  • Auth: Use IAM, JWT, Cognito

  • Rate limiting / throttling

  • Request/response transformation

  • Monitoring and logging via CloudWatch

  • Versioning and routing for microservices

Setup (Simplified):

  1. Create API Gateway (HTTP API)

  2. Create a VPC Link

  3. Set integration target as NLB or ALB endpoint

  4. Define routes like /users, /orders, etc., that point to backend paths
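Those four steps, sketched with boto3 for the NLB case. The subnet IDs, security group, and NLB listener ARN are placeholders for resources assumed to already exist.

```python
# Hedged sketch of the simplified setup above: HTTP API -> VPC Link -> private NLB.
# All IDs/ARNs are placeholders for resources assumed to exist already.
import boto3

apigw = boto3.client("apigatewayv2")

# 1. HTTP API
api = apigw.create_api(Name="orders-api", ProtocolType="HTTP")

# 2. VPC Link into the private subnets that can reach the NLB
vpc_link = apigw.create_vpc_link(
    Name="orders-vpc-link",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    SecurityGroupIds=["sg-0ccc3333"],
)

# 3. Integration target: the NLB listener ARN, reached through the VPC Link
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri="arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/net/internal-nlb/abc/def",
    ConnectionType="VPC_LINK",
    ConnectionId=vpc_link["VpcLinkId"],
    PayloadFormatVersion="1.0",
)

# 4. Routes like /users and /orders pointing at the private backend
for route_key in ("ANY /users", "ANY /orders"):
    apigw.create_route(
        ApiId=api["ApiId"],
        RouteKey=route_key,
        Target=f"integrations/{integration['IntegrationId']}",
    )

# Auto-deployed default stage so the routes are immediately reachable
apigw.create_stage(ApiId=api["ApiId"], StageName="$default", AutoDeploy=True)
```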

Summary

  1. API Gateway as a reverse proxy lets you front your backend services securely.

  2. It's not directly linked to ALB/NLB, but it can forward requests using a VPC Link (for private backends) or a public HTTP integration.

  3. Great for hybrid or microservices where you want API Gateway’s features without exposing your infrastructure directly.

API Gateway → ALB (Public HTTP Integration)

Scenario:

You have a public-facing web service behind an Application Load Balancer (ALB):

  • ALB DNS: app-backend-123456.elb.amazonaws.com

  • App routes: /users, /orders, /products

You want to expose this through Amazon API Gateway with:

  • Custom domain: api.example.com

Flow

How to Set It Up:

1. Create an HTTP API in API Gateway

2. Add routes like:

  • /users → integration target: https://app-backend-123456.elb.amazonaws.com/users

  • /orders → integration target: https://app-backend-123456.elb.amazonaws.com/orders

3. Attach a custom domain: api.example.com

4. Add WAF, Cognito auth, etc., if needed
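A hedged boto3 sketch of those four steps, reusing the ALB DNS name and routes from the scenario above; the ACM certificate ARN is a placeholder, and the DNS alias record for api.example.com is left out for brevity.

```python
# Hedged sketch: HTTP API proxying to the public ALB from the scenario above,
# exposed under the custom domain api.example.com. The certificate ARN is a placeholder.
import boto3

apigw = boto3.client("apigatewayv2")

api = apigw.create_api(Name="public-web-api", ProtocolType="HTTP")

# One HTTP_PROXY integration and route per backend path on the ALB
for path in ("/users", "/orders", "/products"):
    integration = apigw.create_integration(
        ApiId=api["ApiId"],
        IntegrationType="HTTP_PROXY",
        IntegrationMethod="ANY",
        IntegrationUri=f"https://app-backend-123456.elb.amazonaws.com{path}",
        PayloadFormatVersion="1.0",
    )
    apigw.create_route(
        ApiId=api["ApiId"],
        RouteKey=f"ANY {path}",
        Target=f"integrations/{integration['IntegrationId']}",
    )

apigw.create_stage(ApiId=api["ApiId"], StageName="$default", AutoDeploy=True)

# Custom domain + API mapping (the Route 53 alias to the domain's target is configured separately)
apigw.create_domain_name(
    DomainName="api.example.com",
    DomainNameConfigurations=[{
        "CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/placeholder",
        "EndpointType": "REGIONAL",
    }],
)
apigw.create_api_mapping(ApiId=api["ApiId"], DomainName="api.example.com", Stage="$default")
```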

Why Use API Gateway Here?

  • Add auth (JWT/Cognito) on top of ALB-based apps

  • Obscure internal ALB URL

  • Uniform API entrypoint (api.example.com) even if backends are spread across services

API Gateway → NLB (Private Integration via VPC Link)

Scenario:

You have an internal microservice (e.g., in ECS or EC2) behind a Network Load Balancer (NLB) inside a private VPC.

  • NLB DNS (private): internal-nlb-123456.elb.internal

  • You want to expose only part of the service externally through API Gateway

  • Custom domain: secure-api.example.com

How to Set It Up:

  • Create a VPC Link in API Gateway (maps to your NLB)

  • Create an HTTP API

  • Add routes like /users and /orders that forward to the NLB through the VPC Link

  • Secure it with IAM, a JWT/Cognito authorizer, and throttling (see the sketch after this list)

  • Add a custom domain (secure-api.example.com)
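For the "secure it with" step, one common choice is a JWT authorizer backed by an Amazon Cognito user pool. A minimal sketch, assuming the HTTP API and its VPC Link routes already exist; all IDs below are placeholders.

```python
# Hedged sketch: protect a route on an existing HTTP API with a Cognito-backed
# JWT authorizer. Pool, client, API, and route IDs are placeholders.
import boto3

apigw = boto3.client("apigatewayv2")

API_ID = "a1b2c3d4e5"                    # existing HTTP API (placeholder)
USER_POOL_ID = "eu-west-1_EXAMPLE"       # Cognito user pool (placeholder)
APP_CLIENT_ID = "example-app-client-id"  # Cognito app client (placeholder)

authorizer = apigw.create_authorizer(
    ApiId=API_ID,
    Name="cognito-jwt",
    AuthorizerType="JWT",
    IdentitySource=["$request.header.Authorization"],  # token expected in the Authorization header
    JwtConfiguration={
        "Audience": [APP_CLIENT_ID],
        "Issuer": f"https://cognito-idp.eu-west-1.amazonaws.com/{USER_POOL_ID}",
    },
)

# Require a valid JWT on an existing route (e.g. ANY /users)
apigw.update_route(
    ApiId=API_ID,
    RouteId="route-id-placeholder",
    AuthorizationType="JWT",
    AuthorizerId=authorizer["AuthorizerId"],
)
```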

Why Use API Gateway Here?

  • Keep backend entirely private (not on public internet)

  • Add auth, throttling, logging without changing backend

  • Provide controlled, secure external access to only selected endpoints

So, in essence, API Gateway can act as a reverse proxy that forwards requests to your ALB or NLB.

Is there a need to make an ALB public? Whether or not to expose an ALB publicly depends entirely on how your system is designed to receive traffic.

Risks of a Public ALB (and How to Mitigate)

When to Use a Private ALB (Preferred for APIs)

If your ALB is only meant to be reached by API Gateway, CloudFront, or internal services, then it should be private.

Example Use Case: API Gateway → Private ALB

  • Public API: api.example.com

  • API Gateway receives requests

  • Forwards them to a private ALB

  • ALB is in a private subnet

  • Secured via VPC Link, NACLs, SGs, no public IP

This avoids direct public access to your app layer. Only API Gateway controls exposure.

