Deployment Architectures - Part I: WAF, ALB | NLB, ADC | API Gateway

A load balancer is a system that distributes incoming network traffic across multiple servers to ensure no single server becomes overwhelmed. Its main goal is to improve performance, reliability, and scalability of applications.

[Image: Load Balancer]

Here’s a breakdown of what a load balancer does:

  • Distributes traffic evenly among available servers.
  • Improves availability by rerouting traffic if one server fails.
  • Enhances scalability by allowing new servers to be added easily.
  • Optimizes performance by directing traffic to the least busy or fastest-responding server.

Load balancers can work at different levels:

  • Layer 4 (Transport Layer): Routes traffic based on IP address and TCP/UDP ports.
  • Layer 7 (Application Layer): Makes routing decisions based on application-level data (like URLs or headers).

They are commonly used in web applications to manage high traffic and ensure a smooth user experience.
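
To make the distribution idea concrete, here is a minimal round-robin sketch in Python. The backend addresses and the health-check bookkeeping are hypothetical stand-ins for what a real balancer maintains.

    # Minimal round-robin distribution sketch (hypothetical backends).
    import itertools

    backends = ["10.0.1.10:8080", "10.0.1.11:8080", "10.0.1.12:8080"]
    healthy = set(backends)          # a real balancer updates this via health checks
    pool = itertools.cycle(backends)

    def pick_backend() -> str:
        """Return the next healthy backend, skipping failed ones."""
        for _ in range(len(backends)):
            candidate = next(pool)
            if candidate in healthy:
                return candidate
        raise RuntimeError("no healthy backends available")

    # Each incoming request is handed to the next server in rotation:
    for request_id in range(5):
        print(request_id, "->", pick_backend())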

NLB vs ALB        

NLB (Network Load Balancer)

  • NLB operates at Layer 4 (Transport Layer) of the OSI model.
  • It routes traffic based on IP protocol data, not application content.
  • Designed for ultra-high performance, handling millions of requests per second.
  • Supports TCP, UDP, and TLS passthrough for low-latency workloads.
  • Ideal for applications needing static IPs, high availability, and low latency.

ALB (Application Load Balancer)

  • ALB operates at Layer 7 (Application Layer) of the OSI model.
  • It routes traffic based on application-level content like URLs, headers, and hostnames.
  • Supports advanced routing features, including path-based and host-based routing.
  • Works best for HTTP and HTTPS applications, with native WAF and SSL termination.
  • Ideal for microservices, containerized apps, and modern web applications.

[Image: NLB vs ALB]

NLB is generally faster than ALB in terms of raw performance and latency.

Why NLB is Faster

  1. Lower-Level Operation: NLB works at Layer 4 (TCP/UDP) and forwards packets without inspecting the content. This results in minimal overhead and faster performance.
  2. Static IP Support: NLB can use a static IP, reducing DNS resolution times in some scenarios.
  3. Higher Throughput: NLB handles millions of requests per second, designed for extreme performance.

Use NLB if:

  • You need ultra-low latency.
  • You're dealing with non-HTTP protocols (TCP, UDP, TLS passthrough).
  • You want a static IP or Elastic IP.

Use ALB if:

  • You need intelligent routing (e.g., by path, header, or host).
  • You're using HTTP/HTTPS and need WAF, redirect rules, etc.
  • You're deploying microservices or containers with dynamic ports.
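
As a rough sketch of how these two choices look in practice on AWS, the boto3 calls below create each load balancer type and attach a path-based rule to an ALB listener. All names, subnet IDs, and ARNs are placeholders.

    # Sketch: ALB (L7 routing) vs NLB (L4 forwarding) with boto3.
    import boto3

    elbv2 = boto3.client("elbv2")

    # NLB: Layer 4, forwards TCP/UDP without inspecting payloads.
    nlb = elbv2.create_load_balancer(
        Name="demo-nlb", Type="network", Scheme="internet-facing",
        Subnets=["subnet-aaa", "subnet-bbb"],
    )

    # ALB: Layer 7, can route on paths, hosts, and headers.
    alb = elbv2.create_load_balancer(
        Name="demo-alb", Type="application", Scheme="internet-facing",
        Subnets=["subnet-aaa", "subnet-bbb"], SecurityGroups=["sg-123"],
    )

    # Path-based rule on an existing ALB listener: /api/* goes to the API target group.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/demo-alb/...",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api/..."}],
    )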

TLS Termination vs Passthrough        

TLS Termination

It means decrypting TLS-encrypted (HTTPS) traffic at a specific point (like a load balancer or gateway). This point handles the TLS handshake with the client and manages the SSL certificate. After termination, traffic is usually forwarded as plain HTTP (or re-encrypted). It offloads TLS processing from backend servers and enables inspection, routing, and WAF.

  • TLS (HTTPS) is decrypted at a specific component (e.g., a load balancer, API gateway).
  • After decryption, the traffic is usually forwarded in plain HTTP (or re-encrypted) to the backend.
  • That component becomes responsible for SSL certificate management and handling TLS handshakes.
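
A minimal termination sketch using Python's standard library, assuming a certificate pair (cert.pem/key.pem) and a hypothetical backend at 10.0.2.20: the terminator completes the TLS handshake, sees plaintext, and forwards plain HTTP.

    # Sketch: TLS termination with stdlib Python.
    # cert.pem / key.pem and the backend address are placeholders.
    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()      # TLS handshake happens here
            request = conn.recv(65536)              # already-decrypted bytes
            # Forward as plain HTTP to the backend (re-encryption is also possible):
            with socket.create_connection(("10.0.2.20", 80)) as backend:
                backend.sendall(request)
                conn.sendall(backend.recv(65536))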

TLS Passthrough

It means that encrypted TLS (Transport Layer Security) traffic is forwarded by a load balancer without decrypting it. The load balancer simply passes the encrypted packets directly to the backend server, which is responsible for terminating (decrypting) the TLS connection.

  • TLS is not decrypted at the intermediate layer.
  • It is simply forwarded as-is to the backend server, which then handles decryption.
  • Certificates and TLS logic live on the actual application server.
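
By contrast, a passthrough proxy never touches the ssl module at all. This stdlib sketch (backend address hypothetical) simply relays encrypted bytes in both directions; only the backend can decrypt them.

    # Sketch: TLS passthrough — relay encrypted bytes without ever decrypting.
    # No certificate here; the backend (10.0.2.20:443) terminates TLS.
    import socket, threading

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes one way until the connection closes."""
        while chunk := src.recv(65536):
            dst.sendall(chunk)

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        client, _ = listener.accept()
        backend = socket.create_connection(("10.0.2.20", 443))
        # Encrypted traffic flows through untouched in both directions:
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        pipe(backend, client)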

[Image: TLS Termination vs Passthrough]

Which Component Should Handle TLS Termination?

It depends on the application's needs - here’s a breakdown:

AWS WAF

  • Can only inspect decrypted (Layer 7) traffic, so requires TLS to be terminated before reaching WAF.
  • Use with: API Gateway or ALB (not NLB).

API Gateway

  • Good for public-facing APIs.
  • Handles TLS termination, throttling, auth, logging, WAF.
  • Best when: You want full API control, monitoring, and managed TLS.

Application Load Balancer (ALB)

  • Terminates TLS and routes based on HTTP headers, paths, etc.
  • Supports WAF, authentication, and redirect logic.
  • Best when: You want smart L7 routing and integration with other AWS services.

Network Load Balancer (NLB)

  • Works at Layer 4 (TCP).
  • Does NOT terminate TLS by default (unless you use NLB + TLS listener).
  • Can do TLS passthrough to backend, or terminate TLS if configured with a certificate.
  • Best when: You need very low latency, high throughput, or preserve end-to-end TLS.
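
A hedged boto3 sketch of both NLB modes side by side; the ARNs and ACM certificate below are placeholders.

    # Sketch: same NLB, two listeners — TLS (terminates) vs TCP (passthrough).
    import boto3

    elbv2 = boto3.client("elbv2")

    # TLS listener: the NLB terminates TLS using an ACM certificate.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/demo-nlb/...",
        Protocol="TLS", Port=443,
        Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/..."}],
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": "arn:...:targetgroup/tls-tg/..."}],
    )

    # TCP listener: encrypted bytes pass through; backends terminate TLS themselves.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/demo-nlb/...",
        Protocol="TCP", Port=8443,
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": "arn:...:targetgroup/passthrough-tg/..."}],
    )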

Actual Server

You may choose to terminate TLS here for end-to-end encryption, but:

  • Adds CPU overhead.
  • Prevents inspection by WAF, ALB, etc.
  • Often needed for regulatory or compliance reasons (e.g., HIPAA, PCI).

Recommended Best Practices (AWS)

You can also terminate TLS at one layer and re-encrypt for the next hop — this gives the best of both worlds (inspection + end-to-end encryption).
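
One way to sketch this on AWS: give the ALB a target group whose protocol is HTTPS, so traffic decrypted (and inspected) at the ALB is re-encrypted on the hop to the backend. The IDs below are placeholders.

    # Sketch: terminate at the ALB, re-encrypt to the backend.
    import boto3

    elbv2 = boto3.client("elbv2")

    # Targets in this group are reached over HTTPS, so the hop behind the
    # ALB stays encrypted even though the ALB terminated the client's TLS.
    elbv2.create_target_group(
        Name="reencrypt-tg",
        Protocol="HTTPS", Port=443,
        VpcId="vpc-0123456789abcdef0",
        HealthCheckProtocol="HTTPS", HealthCheckPath="/healthz",
    )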

SSL Acceleration & Offloading        

SSL (Secure Sockets Layer), or more accurately TLS (Transport Layer Security), plays a crucial role in ensuring the secure delivery of web applications. It enables authentication (typically of the website to the client, and optionally in the reverse direction) and encrypts the data transmitted between them.

However, this security comes with a performance cost, as establishing each client session requires significant computational resources. By offloading SSL processing to a load balancer, this burden is removed from the web servers, allowing them to focus on handling core application functions.

Load balancers are well-suited for SSL offloading. Not only does this free up server capacity, but it also enables the load balancer to inspect traffic and enforce security and traffic management policies. Many hardware-based load balancers come equipped with dedicated cryptographic processors designed to handle high volumes of SSL transactions efficiently and safeguard private keys used in secure communications.

SSL relies on the RSA algorithm to authenticate and securely exchange keys between clients and websites. RSA is a mathematical "trapdoor" function that uses a pair of keys: a private key, kept securely on the web server or load balancer, and a public key, which is distributed to clients. The public key is included in a digital certificate, allowing clients to verify the authenticity of the associated private key.

Data encrypted with one key can only be decrypted with the other. This lets the server prove its identity (by signing data with its private key, which clients verify using the public key) and lets clients send data securely (by encrypting with the public key, which only the server can decrypt using the private key). This use of two keys is known as asymmetric encryption.

Due to the high computational cost of RSA, it isn't practical to use it for all communication between the client and server. Instead, RSA has traditionally been used only during the initial handshake to exchange a one-time session key for a more efficient symmetric algorithm such as AES (modern TLS versions favor ephemeral Diffie-Hellman for this exchange, using RSA for authentication). It's this initial exchange that places the greatest demand on system resources and benefits most from acceleration and offloading.
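
The following toy sketch mirrors that division of labor using the third-party cryptography package: RSA wraps a one-time AES key, and AES-GCM then carries the bulk data. A real TLS handshake involves far more (negotiation, transcripts, certificate chains).

    # Toy hybrid-encryption sketch (requires the `cryptography` package).
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Server's long-lived asymmetric key pair.
    server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Client: make a one-time session key, encrypt it with the server's public key.
    session_key = AESGCM.generate_key(bit_length=128)
    wrapped = server_key.public_key().encrypt(session_key, oaep)

    # Server: unwrap the session key with its private key (the expensive RSA step).
    unwrapped = server_key.decrypt(wrapped, oaep)

    # Both sides now use cheap symmetric AES-GCM for the actual traffic.
    nonce = os.urandom(12)
    ciphertext = AESGCM(unwrapped).encrypt(nonce, b"GET / HTTP/1.1", None)
    print(AESGCM(session_key).decrypt(nonce, ciphertext, None))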

Benefits of SSL Acceleration and Offloading

Offloading SSL processing to a load balancer offers several advantages that go beyond performance enhancement:

  • Web Application Firewall (WAF): Allows inspection of incoming client requests for malicious content that could compromise server security.
  • Authentication: Enables client identity verification before granting access to protected web resources.
  • Content Rewriting: Supports the modification of server responses to obscure URLs or correct issues related to hardcoded elements in published applications.
  • Content Inspection: Blocks the transfer of specific content types based on patterns, such as file extensions.
  • Content-Based Routing: Directs traffic according to content type—for example, routing image requests to servers optimized for media delivery.
  • Caching: Frequently requested web content can be cached at the load balancer, reducing the load on backend servers and improving response times.
  • Re-encryption: Traffic can be re-encrypted before being sent to backend servers, enhancing security within the internal network.

Another key advantage is centralized management. By handling SSL termination at the load balancer, certificates and private keys are maintained in a single location, rather than on multiple servers. Security and traffic policies can also be applied uniformly from one control point. This simplifies administrative tasks and clearly separates security management from application ownership responsibilities.

Internal ALB / NLB (In AWS)        

You can deploy services behind an internal ALB or NLB, which only allows intra-VPC traffic or traffic from peered VPCs.

Example

  • Service A wants to call service B
  • Service B is behind an internal ALB
  • Service A calls http://internal-service-b.myapp.local

This gives you better routing, path control, and abstraction for service communication.
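
A minimal boto3 sketch with placeholder subnet and security-group IDs. Scheme="internal" is what keeps the ALB off the public internet; a name like internal-service-b.myapp.local would be an alias record in a private Route 53 zone.

    # Sketch: an internal ALB reachable only inside the VPC.
    import boto3

    elbv2 = boto3.client("elbv2")
    internal_alb = elbv2.create_load_balancer(
        Name="service-b-internal",
        Type="application",
        Scheme="internal",                      # no public IPs; intra-VPC only
        Subnets=["subnet-private-a", "subnet-private-b"],
        SecurityGroups=["sg-allow-from-service-a"],
    )
    print(internal_alb["LoadBalancers"][0]["DNSName"])
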
Application Delivery Controllers (ADC)        

Originally, load balancers were designed to distribute incoming network traffic across multiple servers to ensure no single server became overwhelmed. This improved availability and performance. However, as applications grew more complex and security needs increased, there was a demand for smarter, more functional devices at the edge of the network - closer to where users connect.

The development of Application Delivery Controllers (ADCs) marked a significant evolution beyond traditional load balancing. Modern ADCs not only distribute network traffic evenly across enterprise applications but also enhance their performance, reliability, resource efficiency, and security. Beyond these functions, ADCs accelerate applications, compress and store data, manage traffic flow, switch and multiplex content, and provide security measures for open-source applications.

By applying various optimization techniques, ADCs improve the performance of applications delivered over wide area networks (WANs). One of their core roles is to serve as a centralized point for managing authentication, authorization, and auditing across multiple backend application servers, all within a secured environment.

Layer 7 Security

ADCs function at the application layer (Layer 7) of the OSI model, enabling them to analyze and filter application-specific traffic, such as HTTP requests. This capability allows them to block malicious activities like SQL injection and cross-site scripting (XSS) attacks.

SSL Offloading

ADCs handle the encryption and decryption of HTTPS traffic, reducing the computational burden on backend servers. This enhances overall system performance by offloading resource-intensive SSL/TLS operations.

Caching

To improve response times, ADCs cache frequently accessed content, eliminating the need to retrieve the same data from servers repeatedly.

Application Acceleration and ADC Capabilities

Because ADCs sit between end-user browsers and backend web application servers, they are ideally positioned to collect performance data and implement optimization strategies. Key acceleration features include:

  • Traffic Optimization: ADCs use data compression and content caching to minimize data size and avoid reprocessing repeat requests - such as fetching JavaScript files - thus improving responsiveness.
  • Blue/Green Deployments: In DevOps workflows, ADCs facilitate zero-downtime updates by directing and monitoring traffic between old (blue) and new (green) deployments based on criteria such as traffic percentage, geographic location, or IP ranges (see the sketch after this list).
  • SSL/TLS Offloading: Offloading SSL/TLS processing from web servers to ADCs, which often include dedicated encryption hardware, greatly enhances performance and server efficiency. ADCs also support modern encryption standards like Elliptic Curve Cryptography (ECC), AES-NI, GCM, and Perfect Forward Secrecy (PFS) using Elliptic Curve Diffie-Hellman Exchange (ECDHE). Additional techniques like SSL termination, bridging, proxying, and session reuse further contribute to performance improvements.
  • Metrics Collection: ADCs provide a wealth of actionable metrics, including real-time traffic data, security incident logs, server performance indicators, and system health diagnostics. These insights are invaluable for troubleshooting and optimizing system performance.
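
Vendor ADCs expose blue/green controls in their own ways; as one concrete example, here is how a 90/10 split might look using AWS ALB weighted target groups (the ARNs are placeholders).

    # Sketch: blue/green split via ALB weighted target groups.
    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/demo-alb/...",
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": "arn:...:targetgroup/blue/...",  "Weight": 90},
                    {"TargetGroupArn": "arn:...:targetgroup/green/...", "Weight": 10},
                ]
            },
        }],
    )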

[Image: WAF & ADC]

In short, ADCs provide both security and performance optimization for modern web applications.

[Image: ADC]

Here, reverse proxy refers to using the advanced functionality of an ADC whether or not load balancing is needed.

ADC vs Load Balancers (ALB, NLB)        

Can an ADC Do Load Balancing?

An ADC can combine the roles of both:

  • ALB (Application Load Balancer) – which operates at Layer 7 (application layer), and
  • NLB (Network Load Balancer) – which operates at Layer 4 (transport layer).

Some enterprise-grade ADCs even support L3 (IP-level) routing, multi-tenancy, advanced SSL termination, and DDoS protection — all in one box or virtual appliance.

[Image: ADC vs ALB vs NLB]


API Gateway        

An API Gateway is a tool used to manage and route API requests. An API (Application Programming Interface) enables communication between two applications, typically over HTTP or HTTPS. As applications increasingly relied on APIs for interaction, there emerged a need to manage these APIs efficiently. Developers sought ways to handle common tasks - such as authentication, rate limiting, and SSL encryption/decryption - separately from the core business logic of the application.

This led to the adoption of API gateways: centralized points that expose APIs through a single entryway. These gateways handle cross-cutting concerns and route requests to the appropriate backend services quickly and efficiently.

[Image: API Gateway]

An API gateway offers a range of services that streamline API development and management. Many of its features overlap with those of Application Delivery Controllers (ADCs), including:

  • Request authentication
  • Request filtering and routing
  • SSL termination
  • Data caching
  • Rate limiting
  • Logging and performance metrics

Additionally, some API gateways support protocol translation, enabling the conversion of requests from one protocol to another - for example, converting HTTP requests to gRPC. Another powerful capability is request aggregation, where a single incoming request triggers multiple calls to different backend services, with the gateway combining the results into a single response for the client.
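
A small stand-alone sketch of aggregation, with hypothetical backend URLs: the gateway fans out to the backends in parallel and merges the JSON results into one response body.

    # Sketch: request aggregation — one client request, several backend calls.
    import json
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    BACKENDS = {
        "user":   "http://users.internal/api/user/42",
        "orders": "http://orders.internal/api/orders?user=42",
    }

    def fetch(url: str) -> dict:
        with urlopen(url, timeout=2) as resp:
            return json.load(resp)

    def aggregate() -> str:
        # Call the backends in parallel, then combine into a single response.
        with ThreadPoolExecutor() as pool:
            results = dict(zip(BACKENDS, pool.map(fetch, BACKENDS.values())))
        return json.dumps(results)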

ADCs vs API Gateways        

Application Delivery Controllers (ADCs) and API Gateways are not the same, although they share some overlapping functionalities. Here’s a clear breakdown of how they differ and where they overlap:

Similarities (Overlapping Functions):

Both ADCs and API Gateways can perform:

  • SSL/TLS termination
  • Load balancing or traffic routing
  • Caching
  • Authentication and authorization
  • Rate limiting
  • Logging and metrics collection

These shared capabilities are what often cause confusion.

Key Differences

[Image: ADCs vs API Gateways]

Will ADCs Become Obsolete?

Although it may appear that ADCs are becoming obsolete, in reality they are not; their role is evolving in today's cloud-native, microservices-driven environments.

How the ADC Role is Changing

[1] From Hardware to Software and Cloud

  • Traditional ADCs were hardware-based, deployed in data centers.
  • Now, many vendors offer virtual ADCs, cloud-native ADCs, or containerized versions to support hybrid and multi-cloud environments.
  • ADCs are being integrated into Kubernetes, service meshes, and cloud platforms.

[2] Less Central, More Distributed

  • In microservices and containerized environments, API Gateways and service meshes (like Istio) often take over Layer 7 (application layer) tasks.
  • However, ADCs are still valuable at the edge, in north-south traffic management, or for legacy monolithic applications.

[3] Overlap with API Gateways

  • API Gateways excel in API-specific features (e.g., versioning, protocol translation), while ADCs are broader in scope (e.g., handling TCP/UDP, SSL offload, WAF, GSLB).
  • Some modern ADCs are adding API Gateway features to stay relevant and competitive.

Where ADCs Are Still Strong

  • Global server load balancing (GSLB)
  • SSL/TLS offloading with advanced cryptographic support
  • Web Application Firewall (WAF) integration
  • Traffic shaping, compression, and caching
  • High-performance edge security and acceleration
  • Legacy app environments where microservices aren't used

When ADCs Might Be Replaced

In purely cloud-native or serverless architectures, ADCs may be supplemented or replaced by:

  • API Gateways (e.g., Kong, Apigee, AWS API Gateway)
  • Service Meshes (e.g., Istio, Linkerd, Consul)
  • Native cloud load balancers (e.g., AWS ELB/ALB, Azure Load Balancer)

In summary, ADCs are not obsolete, but they're no longer the only (or always the best) choice. They are:

  • Essential in traditional and hybrid environments
  • Adapting to cloud-native and containerized ecosystems
  • Complementary to API gateways and service meshes, not direct replacements

If you're building a greenfield cloud-native architecture, an API gateway or service mesh might be your first choice. But in complex enterprises with hybrid or legacy systems, ADCs still play a crucial role.

ADCs Are Useful in Hybrid Deployments        

Hybrid cloud means you're running applications across both on-premises infrastructure and public cloud services. In such environments, ADCs can help with:

  • Consistent Traffic Management Across Environments: ADCs provide a unified control layer for traffic distribution, SSL offload, caching, and more — whether traffic is on-prem or in the cloud.
  • Secure Application Delivery: Centralized SSL/TLS management, Web Application Firewall (WAF) features, and Layer 7 filtering offer enterprise-grade security — useful in both parts of the hybrid environment.
  • Legacy + Modern Coexistence: Enterprises often have legacy apps in data centers and modern apps in the cloud. ADCs can efficiently handle both types, bridging the gap.
  • Global Load Balancing and Failover: ADCs can manage global server load balancing (GSLB) and failover between regions or clouds, which is hard to replicate using native cloud tools alone.

Reverse Proxy        

What is a Reverse Proxy?

A reverse proxy (e.g., NGINX) is a server that sits in front of backend servers, accepts incoming client requests, forwards them to those backend servers, and then returns the backend response to the client. In simpler terms:

  • Client thinks it’s talking to one server (the proxy)
  • The proxy internally routes the request to the right backend server
  • Backend is hidden from the outside world
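
Here is a deliberately tiny reverse proxy in stdlib Python (GET-only, hypothetical backend address) that shows the pattern: the client only ever talks to the proxy.

    # Minimal reverse-proxy sketch: clients hit port 8080; the proxy relays
    # to a hidden backend. Real proxies like NGINX add pooling, streaming,
    # and header rewriting on top of this idea.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    BACKEND = "http://10.0.2.20:9000"   # invisible to the client

    class Proxy(BaseHTTPRequestHandler):
        def do_GET(self):
            with urlopen(BACKEND + self.path, timeout=5) as upstream:
                body = upstream.read()
                status = upstream.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), Proxy).serve_forever()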

Reverse Proxy vs Forward Proxy

[Image: Reverse Proxy vs Forward Proxy]

ADCs vs Reverse Proxies

Reverse Proxy: A server that sits in front of one or more web servers and forwards client requests to them. It hides the backend servers from the client and can provide additional features like security and caching.

Consider the statement: "The reverse proxy referred to utilizing the advanced ADC functionality whether load balancing is needed or not."

This means that reverse proxies - which traditionally serve as intermediaries between clients (like browsers) and backend servers - are now increasingly using the advanced features of Application Delivery Controllers (ADCs), even if load balancing (i.e., distributing traffic among multiple servers) is not required.

Key Idea

Even when there is only one backend server (so no load balancing is needed), organizations still deploy ADCs as reverse proxies to take advantage of these other benefits. For example:

  • To offload SSL processing and reduce server load
  • To filter out malicious requests at Layer 7
  • To accelerate application delivery through compression or caching
  • To centralize authentication and monitoring

In Summary

A reverse proxy using ADC functionality is not limited to load balancing. It's about leveraging all the other performance, security, and optimization features that an ADC offers - making it a valuable part of the infrastructure even for single-server applications.

How API Gateway Acts as a Reverse Proxy to ALB/NLB        

You can configure API Gateway (HTTP API or REST API) to act as a reverse proxy, like this:

Architecture Flow

[Image: architecture flow]

Use Case Example

You have a private microservice running in a VPC behind an NLB. You want to expose it securely via API Gateway, so:

  • You create a VPC Link (private connection)
  • Set up API Gateway to forward requests to your NLB
  • The client hits API Gateway, but the backend service is protected and never exposed directly


Benefits of using API Gateway as reverse proxy

  • Security: Backend isn't exposed publicly
  • Auth: Use IAM, JWT, Cognito
  • Rate limiting / throttling
  • Request/response transformation
  • Monitoring and logging via CloudWatch
  • Versioning and routing for microservices

Setup

  1. Create API Gateway (HTTP API)
  2. Create a VPC Link
  3. Set integration target as NLB or ALB endpoint
  4. Define routes like /users, /orders, etc., that point to backend paths
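
A hedged boto3 sketch of those four steps for an HTTP API; the subnet IDs, security group, and NLB listener ARN are placeholders.

    # Sketch: API Gateway (HTTP API) as a reverse proxy to a private NLB.
    import boto3

    apigw = boto3.client("apigatewayv2")

    api = apigw.create_api(Name="orders-api", ProtocolType="HTTP")          # step 1
    link = apigw.create_vpc_link(                                           # step 2
        Name="private-link",
        SubnetIds=["subnet-private-a", "subnet-private-b"],
        SecurityGroupIds=["sg-gateway"],
    )
    integration = apigw.create_integration(                                 # step 3
        ApiId=api["ApiId"],
        IntegrationType="HTTP_PROXY",
        IntegrationMethod="ANY",
        IntegrationUri="arn:aws:elasticloadbalancing:...:listener/net/my-nlb/...",
        ConnectionType="VPC_LINK",
        ConnectionId=link["VpcLinkId"],
        PayloadFormatVersion="1.0",
    )
    apigw.create_route(                                                     # step 4
        ApiId=api["ApiId"],
        RouteKey="ANY /users/{proxy+}",
        Target=f"integrations/{integration['IntegrationId']}",
    )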

In summary, API Gateway as a reverse proxy lets you front your backend services securely.

  • It's not directly attached to the ALB/NLB; it forwards requests to them through a VPC Link integration.
  • Great for hybrid or microservices architectures where you want API Gateway's features without exposing your infrastructure directly.

What is a WAF and What Does It Protect Against?        

A Web Application Firewall (WAF) is a security system that monitors, filters, and blocks HTTP/HTTPS traffic to and from a web application to protect it from common attacks. It operates at Layer 7 (Application Layer) of the OSI model, focusing specifically on web application threats.

WAFs defend against a range of OWASP (Open Worldwide Application Security Project) Top 10 web vulnerabilities, including:

  • SQL Injection
  • Cross-Site Scripting (XSS)
  • Cross-Site Request Forgery (CSRF)
  • File inclusion attacks
  • Remote code execution
  • Broken authentication or session hijacking
  • Bot traffic and DDoS (in some cases)

How Does a WAF Work?

A WAF sits between users (clients) and the web application. It acts like a reverse proxy, inspecting incoming and outgoing traffic and applying rules to:

  • Allow legitimate traffic
  • Block or challenge malicious traffic
  • Log suspicious activities
  • Sanitize inputs to prevent exploits

Types of WAFs:

  1. Network-based WAF: Typically hardware appliances. Fast but costly.
  2. Host-based WAF: Software installed on the same server as the app. Customizable, but consumes local resources.
  3. Cloud-based WAF: Offered as a service (e.g., AWS WAF, Cloudflare WAF). Easy to deploy, scalable, and cost-effective.

In summary, a WAF protects web applications by filtering and monitoring HTTP traffic and blocking common attacks like XSS, SQL injection, and more — helping to secure applications at the edge before threats reach your backend.
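
As a sketch, the boto3 calls below create a regional web ACL with AWS's managed common rule set and associate it with an ALB (the ALB ARN is a placeholder).

    # Sketch: attach AWS WAF's managed common rule set to an ALB.
    import boto3

    wafv2 = boto3.client("wafv2")

    acl = wafv2.create_web_acl(
        Name="web-acl-demo",
        Scope="REGIONAL",            # REGIONAL for ALB/API GW; CLOUDFRONT for CDN
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "common-rules",
            "Priority": 0,
            "Statement": {"ManagedRuleGroupStatement": {
                "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet",
            }},
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {"SampledRequestsEnabled": True,
                                 "CloudWatchMetricsEnabled": True,
                                 "MetricName": "common-rules"},
        }],
        VisibilityConfig={"SampledRequestsEnabled": True,
                          "CloudWatchMetricsEnabled": True,
                          "MetricName": "web-acl-demo"},
    )

    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/demo-alb/...",
    )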

Can an ADC Sit Between WAF and API Gateway?
Internet --> WAF --> ADC --> API Gateway --> Backend Services        

An ADC can sit between WAF and API Gateway, depending on your requirements. Use this design when you need more performance optimization or HA than a pure API Gateway can offer. Here’s what each component typically does:

[Image: what each component typically does]

Alternative architectures:

  • WAF + ADC at Edge, API Gateway Later

  • WAF Embedded in API Gateway (Cloud-native)

Who Terminates SSL in Such Architectures?

Client → TLS → WAF (decrypts) → HTTP or TLS → ADC → API Gateway → App Servers

There are a few common patterns here:

1. WAF terminates TLS, forwards plain HTTP to ADC

  • WAF decrypts HTTPS, inspects traffic.
  • Sends unencrypted HTTP to ADC.
  • Good for deep inspection, but may expose traffic between WAF and ADC.

2. WAF terminates TLS, re-encrypts to ADC (TLS bridging)

  • WAF decrypts, inspects, and then re-encrypts using ADC's certificate.
  • More secure than plain HTTP forwarding.

3. WAF passes encrypted traffic to ADC (no TLS termination)

  • In this case, WAF just inspects metadata (like SNI, IPs, or via SSL inspection proxy techniques), but actual TLS termination happens at the ADC.
  • This is less common unless you need end-to-end encryption or can’t decrypt at the WAF.

Choosing AWS WAF vs Cloudflare WAF        

Choosing between AWS WAF and Cloudflare WAF depends on your infrastructure, application architecture, performance needs, and security priorities.

When to Use AWS WAF

  • You're Already Using AWS Services: Your application is deployed on AWS infrastructure (e.g., EC2, ALB, API Gateway, CloudFront). AWS WAF integrates tightly with these services, making setup and management easier.
  • You Want Fine-Grained, Custom Rule Control: You need highly customizable rules using AWS WAF’s rule builder, regex matching, rate-based rules, and scope-down statements.
  • Cost Predictability: AWS WAF pricing is rule-based and usage-based, which is good if you're managing smaller or more predictable traffic patterns.
  • You Want to Stay Fully Within AWS for Compliance: You require data residency, IAM control, and centralized logging (via CloudWatch) within the AWS ecosystem.

When to Use Cloudflare WAF

  • You Want Global Performance and DDoS Protection: Cloudflare is a global CDN and DNS provider with built-in DDoS protection, rate limiting, and traffic acceleration. It operates at massive global scale with low latency.
  • Your Apps Are Spread Across Multiple Clouds or On-Prem: Cloudflare works regardless of where your origin servers are — AWS, Azure, GCP, on-prem — and protects all traffic at the edge.
  • You Prefer Managed Rulesets and AI Threat Detection: Cloudflare offers managed WAF rules and automatic updates based on threat intelligence — ideal if you don’t want to maintain complex rules yourself.
  • You Want Advanced Edge Features: Cloudflare includes bot management, page rules, firewall rules, and edge computing (via Cloudflare Workers), which can complement WAF policies.
  • You Want a Generous Free Tier or Flat Pricing: For smaller apps or startups, Cloudflare’s free tier includes basic WAF and DDoS protection. Paid plans offer flat pricing, often cheaper than AWS at scale.

CDNs: CloudFront vs Cloudflare        

CloudFront and Cloudflare are often compared because both are Content Delivery Networks (CDNs) and offer security, performance, and edge services — but they have different strengths, and choosing between them depends on your use case, architecture, and budget.

[Image: CloudFront vs Cloudflare comparison]

When to Use CloudFront

  • You're fully deployed on AWS and want deep integration.
  • You need to cache S3 content or integrate with Lambda@Edge.
  • You require fine-grained access control and AWS IAM policies.
  • You're okay with managing more configuration complexity for full control.

When to Use Cloudflare

  • You want a simple, fast, global CDN with powerful security.
  • You need built-in WAF, DDoS protection, SSL, and DNS.
  • Your application is multi-cloud, on-prem, or not AWS-hosted.
  • You want to use Cloudflare Workers for edge logic.
  • You're looking for cost-effective edge services (free or flat-rate plans).

Edge Computing        

Lambda@Edge

It allows you to customize content delivery and behavior at the edge — before requests reach your backend or before responses reach the client. You can use it to:

  • Modify or redirect incoming requests
  • Customize HTTP headers, cookies, or URLs
  • Perform A/B testing or geographic personalization
  • Add authentication logic
  • Rewrite paths or block requests
  • Compress or transform content
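
A minimal Python viewer-request handler sketch: the header name and paths below are made up, but the event shape is the standard CloudFront record structure that Lambda@Edge receives.

    # Sketch: Lambda@Edge viewer-request handler (Python runtime).
    def handler(event, context):
        request = event["Records"][0]["cf"]["request"]

        # Example: add a header the origin can use for A/B bucketing.
        request["headers"]["x-experiment"] = [{"key": "X-Experiment", "value": "b"}]

        # Example: redirect a legacy path before it ever reaches the origin.
        if request["uri"] == "/old-page":
            return {
                "status": "301",
                "statusDescription": "Moved Permanently",
                "headers": {"location": [{"key": "Location", "value": "/new-page"}]},
            }

        return request  # continue to CloudFront / the origin as normal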


Cloudflare Workers

It is Cloudflare’s serverless platform that allows you to write and run JavaScript, TypeScript, or WASM code directly at Cloudflare's edge network, across 300+ global locations — without provisioning servers.

They allow you to intercept, modify, and respond to HTTP requests at the edge, right as they enter Cloudflare's network, to create fast, programmable, globally distributed applications and APIs. How it works:

  • Workers run using a V8 engine (same as Chrome) in a lightweight, isolated environment.
  • No containers or VMs — code executes in milliseconds.
  • Workers are deployed to Cloudflare's global edge, so they respond closer to the user.

Advantages

  • Global by default — runs everywhere instantly
  • Ultra-low latency — executes near the user
  • Cost-effective — generous free tier; pay per request
  • Scalable — handles millions of concurrent requests
  • Secure — sandboxed, no access to OS-level processes
  • Integrates easily — with Cloudflare WAF, CDN, KV storage, Durable Objects, etc.


