Deployment Architectures - Part I: WAF, ALB | NLB, ADC | API Gateway
A load balancer is a system that distributes incoming network traffic across multiple servers to ensure no single server becomes overwhelmed. Its main goal is to improve performance, reliability, and scalability of applications.
Load balancers can work at different levels: at Layer 4 (transport), where they distribute TCP/UDP connections, or at Layer 7 (application), where they route individual HTTP requests. They are commonly used in web applications to manage high traffic and ensure a smooth user experience.
NLB vs ALB
NLB (Network Load Balancer): operates at Layer 4 (transport layer), forwarding TCP/UDP connections based on IP addresses and ports without inspecting application-level content.
ALB (Application Load Balancer): operates at Layer 7 (application layer), inspecting HTTP/HTTPS requests and routing them based on paths, hostnames, headers, and other request attributes.
NLB is generally faster than ALB in terms of raw performance and latency.
Why NLB is Faster
Because it works at Layer 4, an NLB does not parse HTTP, manage cookies, or evaluate routing rules - it simply forwards connections to targets. This minimal per-request processing gives it lower latency and much higher connection throughput than an ALB.
Use NLB if you need ultra-low latency, very high connection volumes, static or Elastic IP addresses per Availability Zone, non-HTTP protocols (raw TCP/UDP), or TLS passthrough to your backends.
Use ALB if you need HTTP-aware features: path- or host-based routing, redirects, WebSocket support, authentication integration, or AWS WAF attachment.
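To make the ALB's Layer 7 routing concrete, here is a minimal boto3 sketch (the ARNs are hypothetical placeholders):

import boto3

# Path-based routing is a Layer 7 (ALB) concept; an NLB forwards raw
# TCP/UDP flows and has no notion of URL paths or hostnames.
elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc123/def456",  # hypothetical
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api-tg/0123456789abcdef",  # hypothetical
    }],
)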
TLS Termination vs Passthrough
TLS Termination
It means decrypting TLS-encrypted (HTTPS) traffic at a specific point (such as a load balancer or gateway). This point handles the TLS handshake with the client and manages the SSL certificate. After termination, traffic is usually forwarded as plain HTTP (or re-encrypted). It offloads TLS processing from backend servers and enables traffic inspection, content-based routing, and WAF filtering.
TLS Passthrough
It means that encrypted TLS (Transport Layer Security) traffic is forwarded by a load balancer without decrypting it. The load balancer simply passes the encrypted packets directly to the backend server, which is responsible for terminating (decrypting) the TLS connection.
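The distinction is easy to see in code. Below is a minimal sketch of a terminating proxy in Python (certificate file names, ports, and the backend address are assumptions; error handling is omitted):

import socket
import ssl
import threading

# TLS termination: the proxy completes the TLS handshake with the client
# using its own certificate, then forwards the decrypted bytes to the
# backend over plain TCP.
BACKEND = ("127.0.0.1", 8080)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="proxy.crt", keyfile="proxy.key")

def pump(src, dst):
    # Copy bytes one way until the sending side closes.
    while data := src.recv(4096):
        dst.sendall(data)

def handle(client_tls):
    upstream = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(upstream, client_tls), daemon=True).start()
    pump(client_tls, upstream)  # client -> backend in this thread

with socket.create_server(("0.0.0.0", 8443)) as listener:
    while True:
        conn, _ = listener.accept()
        # Termination happens here. A passthrough proxy would skip
        # wrap_socket entirely and relay the still-encrypted bytes.
        tls_conn = ctx.wrap_socket(conn, server_side=True)
        threading.Thread(target=handle, args=(tls_conn,), daemon=True).start()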
Which Component Should Handle TLS Termination?
It depends on the application's needs - here’s a breakdown:
AWS WAF: does not terminate TLS itself; it inspects requests after they are decrypted by the resource it is attached to (CloudFront, ALB, or API Gateway).
API Gateway: always terminates TLS for incoming requests; custom domains use certificates managed in AWS Certificate Manager (ACM).
Application Load Balancer (ALB): terminates TLS on HTTPS listeners with ACM certificates and can optionally re-encrypt traffic to targets; a common termination point when you need Layer 7 routing plus WAF inspection.
Network Load Balancer (NLB): can either terminate TLS (TLS listeners) or pass encrypted traffic straight through (TCP listeners) for backends that must see the raw TLS session.
Actual Server
You may choose to terminate TLS here for end-to-end encryption, but this adds CPU load to every application server, multiplies certificate-management overhead, and prevents upstream components from inspecting or routing on request content.
Recommended Best Practices (AWS)
A common AWS recommendation is to terminate TLS at the edge or load-balancing layer (CloudFront, ALB, or API Gateway) using ACM-managed certificates, keeping certificate issuance and renewal automated and centralized. You can also terminate TLS at one layer and re-encrypt for the next hop - this gives the best of both worlds (inspection + end-to-end encryption), as sketched below.
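Here is what terminate-and-re-encrypt looks like with boto3 (a sketch; every identifier is a hypothetical placeholder):

import boto3

elbv2 = boto3.client("elbv2")

# Target group speaking HTTPS: after terminating the client's TLS session,
# the ALB opens a new TLS connection to each target (re-encryption).
tg = elbv2.create_target_group(
    Name="app-tls-tg",
    Protocol="HTTPS",
    Port=8443,
    VpcId="vpc-0123456789abcdef0",  # hypothetical
    TargetType="instance",
)

# HTTPS listener: the client's TLS session terminates at the ALB using an
# ACM-managed certificate.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",  # hypothetical
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/hypothetical-id"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)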
SSL Acceleration & Offloading
SSL (Secure Sockets Layer), or more accurately TLS (Transport Layer Security), plays a crucial role in ensuring the secure delivery of web applications. It enables authentication (typically of the website to the client, and optionally in the reverse direction) and encrypts the data transmitted between them.
However, this security comes with a performance cost, as establishing each client session requires significant computational resources. By offloading SSL processing to a load balancer, this burden is removed from the web servers, allowing them to focus on handling core application functions.
Load balancers are well-suited for SSL offloading. Not only does this free up server capacity, but it also enables the load balancer to inspect traffic and enforce security and traffic management policies. Many hardware-based load balancers come equipped with dedicated cryptographic processors designed to handle high volumes of SSL transactions efficiently and safeguard private keys used in secure communications.
SSL relies on the RSA algorithm to authenticate and securely exchange keys between clients and websites. RSA is a mathematical "trapdoor" function that uses a pair of keys: a private key, kept securely on the web server or load balancer, and a public key, which is distributed to clients. The public key is included in a digital certificate, allowing clients to verify the authenticity of the associated private key.
Data encrypted with one key can only be decrypted with the other. This enables the server to prove its identity (by encrypting with its private key, which the client can decrypt with the public key), and allows clients to send encrypted data securely (by encrypting with the public key, which only the server can decrypt using the private key). This method of using two keys is known as asymmetric encryption.
Due to the high computational cost of RSA, it isn't practical to use it for all communication between the client and server. Instead, asymmetric cryptography is used only during the initial handshake to establish a one-time session key for a more efficient symmetric encryption algorithm like AES. (In modern TLS the session key is usually derived via ephemeral Diffie-Hellman, with RSA used only to authenticate the handshake, but the principle is the same.) It's this initial exchange that places the greatest demand on system resources and benefits most from acceleration and offloading.
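The cost asymmetry is easy to demonstrate. This small sketch (using the third-party cryptography package) times an RSA-2048 signature, the expensive handshake-style operation, against symmetric AES-GCM encryption of a 16 KiB record:

import os
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One RSA-2048 private-key signature approximates the per-handshake cost.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
start = time.perf_counter()
for _ in range(100):
    key.sign(b"handshake transcript", padding.PKCS1v15(), hashes.SHA256())
rsa_ms = (time.perf_counter() - start) * 10  # ms per operation

# Symmetric AES-GCM on a 16 KiB record with the session key. (Reusing a
# nonce is acceptable only because this is a timing demo.)
aead = AESGCM(AESGCM.generate_key(bit_length=128))
nonce, record = os.urandom(12), os.urandom(16 * 1024)
start = time.perf_counter()
for _ in range(100):
    aead.encrypt(nonce, record, None)
aes_ms = (time.perf_counter() - start) * 10

print(f"RSA-2048 sign: ~{rsa_ms:.2f} ms/op; AES-GCM 16 KiB: ~{aes_ms:.3f} ms/op")

On typical hardware the RSA operation is orders of magnitude slower, which is exactly the work that offloading moves off the web servers.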
Benefits of SSL Acceleration and Offloading
Offloading SSL processing to a load balancer offers several advantages that go beyond raw performance: it frees server capacity for application work, and it lets the load balancer inspect decrypted traffic and enforce security and traffic-management policies at a single point.
Another key advantage is centralized management. By handling SSL termination at the load balancer, certificates and private keys are maintained in a single location, rather than on multiple servers. Security and traffic policies can also be applied uniformly from one control point. This simplifies administrative tasks and clearly separates security management from application ownership responsibilities.
Internal ALB / NLB (In AWS)
You can deploy services behind an internal ALB or NLB, which only allows intra-VPC traffic or traffic from peered VPCs.
Example: a set of microservices in a VPC registered behind one internal ALB, with listener rules routing /orders/* and /payments/* (hypothetical paths) to different target groups. Callers inside the VPC use a single stable DNS name instead of tracking individual service instances.
This gives you better routing, path control, and abstraction for service communication, as in the sketch below.
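A minimal boto3 sketch of creating such an internal ALB (subnet and security-group IDs are hypothetical):

import boto3

elbv2 = boto3.client("elbv2")

# Scheme="internal" gives the load balancer only private IP addresses, so
# it is reachable solely from inside the VPC (or connected networks).
resp = elbv2.create_load_balancer(
    Name="internal-services-alb",
    Scheme="internal",
    Type="application",
    Subnets=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],  # hypothetical private subnets
    SecurityGroups=["sg-0cccccccccccccccc"],  # hypothetical
)
print(resp["LoadBalancers"][0]["DNSName"])  # resolves only to private IPs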
Application Delivery Controllers (ADC)
Originally, load balancers were designed to distribute incoming network traffic across multiple servers to ensure no single server became overwhelmed. This improved availability and performance. However, as applications grew more complex and security needs increased, there was a demand for smarter, more functional devices at the edge of the network - closer to where users connect.
The development of Application Delivery Controllers (ADCs) marked a significant evolution beyond traditional load balancing. Modern ADCs not only distribute network traffic evenly across enterprise applications but also enhance their performance, reliability, resource efficiency, and security. Beyond these functions, ADCs accelerate applications, compress and cache data, manage traffic flow, switch and multiplex content, and provide application-layer security.
By applying various optimization techniques, ADCs improve the performance of applications delivered over wide area networks (WANs). One of their core roles is to serve as a centralized point for managing authentication, authorization, and auditing across multiple backend application servers, all within a secured environment.
Layer 7 Security
ADCs function at the application layer (Layer 7) of the OSI model, enabling them to analyze and filter application-specific traffic, such as HTTP requests. This capability allows them to block malicious activities like SQL injection and cross-site scripting (XSS) attacks.
SSL Offloading
ADCs handle the encryption and decryption of HTTPS traffic, reducing the computational burden on backend servers. This enhances overall system performance by offloading resource-intensive SSL/TLS operations.
Caching
To improve response times, ADCs cache frequently accessed content, eliminating the need to retrieve the same data from servers repeatedly.
Application Acceleration and ADC Capabilities
Because ADCs sit between end-user browsers and backend web application servers, they are ideally positioned to collect performance data and implement optimization strategies. Key acceleration features include caching of frequently requested content, HTTP compression, TCP connection multiplexing (reusing a small pool of server-side connections for many client requests), and SSL offload.
In short, ADCs provide both security and performance optimization for modern web applications.
ADC vs Load Balancers (ALB, NLB)
Can ADC Do Load Balancing?
An ADC can combine the roles of both: it can distribute traffic at Layer 4 like an NLB and make content-aware routing decisions at Layer 7 like an ALB, while layering caching, compression, and security features on top.
Some enterprise-grade ADCs even support L3 (IP-level) routing, multi-tenancy, advanced SSL termination, and DDoS protection — all in one box or virtual appliance.
API Gateway
An API Gateway is a tool used to manage and route API requests. An API (Application Programming Interface) enables communication between two applications, typically over HTTP or HTTPS. As applications increasingly relied on APIs for interaction, there emerged a need to manage these APIs efficiently. Developers sought ways to handle common tasks - such as authentication, rate limiting, and SSL encryption/decryption - separately from the core business logic of the application.
This led to the adoption of API gateways: centralized points that expose APIs through a single entryway. These gateways handle cross-cutting concerns and route requests to the appropriate backend services quickly and efficiently.
An API gateway offers a range of services that streamline API development and management. Many of its features overlap with those of Application Delivery Controllers (ADCs), including SSL/TLS termination, authentication and authorization, rate limiting, caching, and request routing.
Additionally, some API gateways support protocol translation, enabling the conversion of requests from one protocol to another - for example, converting HTTP requests to gRPC. Another powerful capability is request aggregation, where a single incoming request triggers multiple calls to different backend services, with the gateway combining the results into a single response for the client.
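To illustrate request aggregation, here is a toy Python sketch in which one gateway handler fans out to two hypothetical internal services and merges the results into a single client response:

import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical internal services sitting behind the gateway.
USER_SVC = "http://users.internal/v1/users/42"
ORDER_SVC = "http://orders.internal/v1/orders?user=42"

def fetch_json(url):
    with urlopen(url, timeout=2) as resp:
        return json.load(resp)

def handle_profile_request():
    # Fan out: one client request triggers two parallel backend calls.
    with ThreadPoolExecutor(max_workers=2) as pool:
        user_future = pool.submit(fetch_json, USER_SVC)
        orders_future = pool.submit(fetch_json, ORDER_SVC)
    # Aggregate: both results become a single response body for the client.
    return json.dumps({"user": user_future.result(), "orders": orders_future.result()})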
ADCs vs API Gateways
Application Delivery Controllers (ADCs) and API Gateways are not the same, although they share some overlapping functionalities. Here’s a clear breakdown of how they differ and where they overlap:
Similarities (Overlapping Functions):
Both ADCs and API Gateways can perform load balancing, SSL/TLS termination, caching, rate limiting, and request routing.
These shared capabilities are what often cause confusion.
Key Differences
An ADC is infrastructure-oriented: it optimizes and secures the delivery of any application traffic (web pages, legacy protocols, and APIs alike) and is typically operated by network teams. An API Gateway is API-oriented: it manages API calls specifically, adding concerns such as API keys, usage quotas, protocol translation, and request/response transformation, and is typically operated by application teams.
Will ADCs Become Obsolete?
Though it may appear that ADCs are becoming obsolete, the reality is that they are not; their role is evolving in today's cloud-native, microservices-driven environments.
How the ADC Role is Changing
[1] From Hardware to Software and Cloud: ADCs began as dedicated hardware appliances in data centers; today they are increasingly delivered as virtual appliances, software packages, and managed cloud services.
[2] Less Central, More Distributed: instead of one large appliance at the data-center edge, delivery functions are now spread across ingress controllers, sidecar proxies, and per-service load balancers.
[3] Overlap with API Gateways: many classic ADC features (authentication, rate limiting, TLS termination, routing) now also live in API gateways, so the two increasingly compete for the same role.
Where ADCs Are Still Strong
ADCs remain strong in enterprise data centers and hybrid environments: handling non-HTTP protocols, applying advanced Layer 7 policies, accelerating SSL in dedicated hardware, and fronting legacy applications that cannot easily move to cloud-native patterns.
When ADCs Might Be Replaced
In purely cloud-native or serverless architectures, ADCs may be supplemented or replaced by cloud load balancers (ALB/NLB), API gateways, service meshes (such as Istio or Linkerd), and CDN/edge platforms.
In summary, ADCs are not obsolete, but they're no longer the only (or always the best) choice. They are evolving into software and cloud form factors, overlapping with API gateways and service meshes, and remaining essential where hybrid or legacy infrastructure is involved.
If you're building a greenfield cloud-native architecture, an API gateway or service mesh might be your first choice. But in complex enterprises with hybrid or legacy systems, ADCs still play a crucial role.
ADCs Are Useful in Hybrid Deployments
Hybrid cloud means you're running applications across both on-premises infrastructure and public cloud services. In such environments, ADCs can help with consistent traffic and security policies across both sides, global server load balancing (GSLB) between data centers and cloud regions, centralized SSL certificate management, and bridging legacy protocols to modern services.
Reverse Proxy
What is a Reverse Proxy?
A reverse proxy (e.g., NGINX) is a server that sits in front of backend servers, accepts incoming client requests, forwards them to those backend servers, and then returns the backend response to the client. In simpler terms: the client talks only to the proxy, and the proxy talks to the servers on the client's behalf.
Reverse Proxy vs Forward Proxy
A forward proxy sits in front of clients and hides them from the servers they call (for example, a corporate proxy for outbound browsing); a reverse proxy sits in front of servers and hides them from clients.
ADCs vs Reverse Proxies
Reverse Proxy: A server that sits in front of one or more web servers and forwards client requests to them. It hides the backend servers from the client and can provide additional features like security and caching.
In modern usage, "reverse proxy" often refers to utilizing this advanced ADC functionality whether load balancing is needed or not.
This means that reverse proxies - which traditionally serve as intermediaries between clients (like browsers) and backend servers - are now increasingly using the advanced features of Application Delivery Controllers (ADCs), even if load balancing (i.e., distributing traffic among multiple servers) is not required.
Key Idea
Even when there is only one backend server (so no load balancing is needed), organizations still deploy ADCs as reverse proxies to take advantage of the other benefits: SSL offloading, caching, compression, and Layer 7 security. For example: a single web server fronted by an ADC that terminates TLS, caches static assets, and filters malicious requests before they reach the application.
In Summary
A reverse proxy using ADC functionality is not limited to load balancing. It's about leveraging all the other performance, security, and optimization features that an ADC offers - making it a valuable part of the infrastructure even for single-server applications.
How API Gateway Acts as a Reverse Proxy to ALB/NLB
You can configure API Gateway (HTTP API or REST API) to act as a reverse proxy in front of a load balancer inside your VPC.
Architecture Flow
Client → API Gateway → VPC Link → internal NLB (or ALB, with HTTP APIs) → private service
Use Case Example
You have a private microservice running in a VPC behind an NLB. You want to expose it securely via API Gateway, so you create a VPC Link to the NLB, add a private integration that targets the NLB's listener, and attach authentication and throttling at the gateway.
Benefits of using API Gateway as a reverse proxy include built-in authentication (IAM, Cognito, JWT, or Lambda authorizers), request throttling and usage plans, request validation, and keeping the backend entirely private with no public endpoint.
Setup
At a high level: create the VPC Link into the service's subnets, create a private integration that proxies to the load balancer's listener, and add a route that sends traffic to that integration, as in the sketch below.
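A minimal sketch of that setup for an HTTP API with boto3 (all IDs, ARNs, and names are hypothetical placeholders):

import boto3

apigw = boto3.client("apigatewayv2")

# 1. VPC Link: gives the HTTP API a network path into the private subnets.
vpc_link = apigw.create_vpc_link(
    Name="svc-vpc-link",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],  # hypothetical
    SecurityGroupIds=["sg-0cccccccccccccccc"],
)

# 2. Private integration: proxy requests to the internal NLB's listener.
integration = apigw.create_integration(
    ApiId="a1b2c3d4",  # hypothetical HTTP API id
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/net/my-nlb/abc/def",  # hypothetical
    ConnectionType="VPC_LINK",
    ConnectionId=vpc_link["VpcLinkId"],
    PayloadFormatVersion="1.0",
)

# 3. Route every path and method through the private integration.
apigw.create_route(
    ApiId="a1b2c3d4",
    RouteKey="ANY /{proxy+}",
    Target="integrations/" + integration["IntegrationId"],
)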
In summary, API Gateway as a reverse proxy lets you front your backend services securely.
What is a WAF and What Does It Protect Against?
A Web Application Firewall (WAF) is a security system that monitors, filters, and blocks HTTP/HTTPS traffic to and from a web application to protect it from common attacks. It operates at Layer 7 (Application Layer) of the OSI model, focusing specifically on web application threats.
WAFs defend against a range of OWASP (Open Worldwide Application Security Project) Top 10 web vulnerabilities, including SQL injection, cross-site scripting (XSS), and other common injection and request-manipulation attacks.
How Does a WAF Work?
A WAF sits between users (clients) and the web application. It acts like a reverse proxy, inspecting incoming and outgoing traffic and applying rules to block known attack signatures, filter malicious payloads, and rate-limit abusive clients.
Types of WAFs: network-based (hardware appliances), host-based (software running alongside the application), and cloud-based (managed services such as AWS WAF or Cloudflare WAF).
In summary, a WAF protects web applications by filtering and monitoring HTTP traffic and blocking common attacks like XSS, SQL injection, and more — helping to secure applications at the edge before threats reach your backend.
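As a toy illustration of the kind of signature matching a WAF rule engine performs (real WAFs rely on curated, constantly updated rule sets and full request parsing, not ad-hoc regexes like these):

import re

# Toy signatures for two common attack classes.
RULES = {
    "sqli": re.compile(r"('|\")\s*(or|and)\s+\d+=\d+|union\s+select", re.I),
    "xss": re.compile(r"<\s*script|javascript:", re.I),
}

def inspect(path, body):
    for name, pattern in RULES.items():
        if pattern.search(path) or pattern.search(body):
            return "BLOCK (" + name + ")"  # signature matched: reject at the edge
    return "ALLOW"  # no rule matched: forward to the backend

print(inspect("/login", "username=' OR 1=1 --"))  # BLOCK (sqli)
print(inspect("/search?q=<script>alert(1)</script>", ""))  # BLOCK (xss)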
Can ADC Be Between WAF and API Gateway?
Internet --> WAF --> ADC --> API Gateway --> Backend Services
An ADC can sit between the WAF and the API Gateway, depending on your requirements. Use this design when you need more performance optimization or high availability than a pure API Gateway can offer. In this chain, the WAF filters malicious traffic at the edge, the ADC handles load balancing, SSL offload, and caching, the API Gateway manages API-level concerns (authentication, routing, rate limits), and the backend services do the application work.
Alternative architectures: WAF → API Gateway → Backend Services (simpler, no ADC layer), or WAF → ADC → Backend Services (no API-management layer, suited to non-API workloads).
Who Terminates SSL in Such Architectures?
Client → TLS → WAF (decrypts) → HTTP or TLS → ADC → API Gateway → App Servers
There are a few common patterns here:
1. WAF terminates TLS, forwards plain HTTP to ADC - simplest and fastest, but traffic crosses the internal network unencrypted.
2. WAF terminates TLS, re-encrypts to ADC (TLS bridging) - the WAF can still inspect payloads while every hop stays encrypted.
3. WAF passes encrypted traffic to ADC (no TLS termination) - preserves end-to-end encryption, but the WAF cannot inspect request contents and is limited to network-level filtering.
Choosing AWS WAF vs Cloudflare WAF
Choosing between AWS WAF and Cloudflare WAF depends on your infrastructure, application architecture, performance needs, and security priorities.
When to Use AWS WAF
Choose AWS WAF when your stack already runs on AWS: it attaches natively to CloudFront, ALB, and API Gateway, integrates with IAM and CloudWatch, and is billed per web ACL, rule, and request alongside the rest of your AWS usage.
When to Use Cloudflare WAF
Choose Cloudflare WAF when you want protection at the edge regardless of where the origin runs: it sits in front of any origin via DNS, bundles managed rule sets with DDoS protection and CDN features, and is quick to enable without changing your infrastructure.
CDNs: CloudFront vs Cloudflare
CloudFront and Cloudflare are often compared because both are Content Delivery Networks (CDNs) and offer security, performance, and edge services — but they have different strengths, and choosing between them depends on your use case, architecture, and budget.
When to Use CloudFront
CloudFront fits AWS-centric architectures: deep integration with S3, ALB, and API Gateway origins, access control for private S3 buckets, Lambda@Edge for programmable behavior, and billing consolidated with the rest of AWS.
When to Use Cloudflare
Cloudflare fits as an all-in-one edge platform: CDN, DNS, WAF, DDoS protection, and Workers under one service, with flat-rate plans (including a free tier) and origins hosted anywhere.
Edge Computing
Lambda@Edge
Lambda@Edge runs Lambda functions at CloudFront edge locations in response to CloudFront events (viewer request/response and origin request/response). It allows you to customize content delivery and behavior at the edge - before requests reach your backend or before responses reach the client. You can use it to rewrite URLs and headers, run A/B tests, redirect users, perform authentication checks, or generate responses entirely at the edge, as in the sketch below.
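A minimal viewer-request handler sketch (Python runtime; the path rule and custom header are hypothetical):

# Runs at the edge location on every request, before CloudFront checks
# its cache or contacts the origin.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # Hypothetical rule: permanently redirect a legacy path at the edge.
    if request["uri"].startswith("/old-blog/"):
        return {
            "status": "301",
            "statusDescription": "Moved Permanently",
            "headers": {
                "location": [{
                    "key": "Location",
                    "value": request["uri"].replace("/old-blog/", "/blog/", 1),
                }]
            },
        }

    # Otherwise tag the request and pass it through toward the origin.
    request["headers"]["x-edge-processed"] = [{"key": "X-Edge-Processed", "value": "1"}]
    return request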
Cloudflare Workers
It is Cloudflare’s serverless platform that allows you to write and run JavaScript, TypeScript, or WASM code directly at Cloudflare's edge network, across 300+ global locations — without provisioning servers.
They allow you to intercept, modify, and respond to HTTP requests at the edge, right as they enter Cloudflare's network, to create fast, programmable, globally distributed applications and APIs. A Worker registers a fetch handler; Cloudflare invokes it for each incoming HTTP request, and the handler returns a response directly or forwards (and optionally modifies) the request.
Advantages
Workers run in V8 isolates rather than containers, so they start in milliseconds with effectively no cold starts, execute close to users across Cloudflare's global locations, and scale automatically with traffic.