Evolution of the Reverse Proxy
Reverse proxies play an important role in the day-to-day functioning of the Internet. A reverse proxy is typically deployed in front of a pool of application servers to help meet the scalability, high-availability, and security requirements of incoming web traffic. As Internet traffic exploded, reverse proxies became an integral part of the Internet's infrastructure in the form of ADCs, WAFs, CDNs, etc.
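The core idea can be sketched with Go's standard library, which ships a minimal reverse proxy in `net/http/httputil`. This is an illustrative toy, not a production proxy: the backend, its response text, and the single-server "pool" are all hypothetical.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// fetchViaProxy stands up a hypothetical application server and a reverse
// proxy in front of it, then issues a client request through the proxy.
func fetchViaProxy() string {
	// The "pool" here is a single backend; a real deployment would
	// balance across many application servers.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello from app server")
	}))
	defer backend.Close()

	// Clients only talk to the proxy, which forwards each request to the
	// backend and relays the response; the backend address stays hidden.
	target, _ := url.Parse(backend.URL)
	proxy := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer proxy.Close()

	resp, err := http.Get(proxy.URL)
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(fetchViaProxy())
}
```

Real-world proxies layer load balancing, TLS termination, caching, and security filtering on top of this basic forward-and-relay loop.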
A Historical Perspective
Originally, reverse proxies were hardware-based, using sophisticated FPGAs and custom ASICs to accelerate the growing north-south traffic. Deploying these proxies required careful network design to ensure that existing topology constraints and assumptions were not violated. Since web traffic could spike at any time due to news cycles, marketing promotions, etc., sophisticated traffic engineering, such as configuring 1+1 resiliency, was required to ensure availability. Because these proxies were not multi-tenant, using them for multiple applications meant a shared configuration namespace, leading to complex bugs in production. The lack of multi-tenancy also meant no resource isolation: a traffic spike for one application could bog down every other application. As the Internet grew and applications scaled, this became a maintenance nightmare for IT teams.
Due to this configuration complexity, the developers building the applications could not be given access to the proxies. This prevented them from making any app-specific adjustments or running pre-release tests in canary deployments. Onboarding new applications required highly specialized IT support and painful approval processes.
When companies started moving their infrastructure to the cloud, they could not simply bolt a virtual instance of their traditional proxy onto the cloud. The DevOps-first cloud world and its requirements, such as on-demand capacity allocation, dynamic service insertion, and incremental rollouts, necessitated a rethinking of the proxy architecture itself.
This led to the emergence of a new scale-out, elastic architecture for reverse proxies, with first-class support for multi-tenancy and an API-first design. It enabled capabilities like high availability beyond the traditional 1+1 redundancy model, resiliency, on-demand scaling, and API-driven management. At the same time, the staple arguments for sticking with hardware proxies grew weak: CPUs became much faster, and compute-intensive features, such as TLS with advanced elliptic-curve cryptography (e.g., ECDHE key exchange), could be implemented natively in the ISA. This further reduced the benefit of hardware proxies, and the world started shifting toward the new architecture.
Fun fact: there is now plenty of data showing that TLS/SSL incurs no more than 1% of CPU load.
With their elastic scalability, REST APIs for management, and self-service capabilities, these cloud-based reverse proxies were easier to test and deploy, significantly simplifying the lives of IT teams and enabling their transition to DevOps.
Containers, Microservices and the Infrastructure-as-Code movement
With the widespread adoption of microservices and containers, we are now in the midst of another shift: to a cloud-native paradigm. New lightweight services, which communicate with each other using remote procedure calls (RPCs), are being written from the ground up.
The disaggregation of monolithic applications into microservices created the desire to abstract all infrastructure and networking state (load balancing, TLS termination, resiliency, etc.) out of the services themselves. The emergence of tools such as Kubernetes enabled teams to declaratively define infrastructure configurations and policies. This declarative control in turn allows developers to version their infrastructure definitions with the same tooling they use for their application source code. This principle is popularly known as Infrastructure-as-Code (IaC).
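As an illustration of this declarative style, here is a minimal Kubernetes manifest of the kind that would live in version control alongside the application source. The service name, image, and replica count are hypothetical:

```yaml
# Hypothetical Deployment checked into the application's repository:
# the desired state (image, replica count) is declared, and Kubernetes
# continuously reconciles the cluster toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: registry.example.com/example-service:1.0.0
          ports:
            - containerPort: 8080
```

Because the manifest is just text, changes to infrastructure go through the same review, diff, and rollback workflow as changes to code.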
Smaller, discrete services made coding easier but resulted in much more complex traffic patterns within the application, because these microservices are inherently numerous and ephemeral. This made it challenging to provide reliable communication, observability, security, etc. for the dynamic mesh of microservices without complicating the lives of developers. To address these problems, service mesh architectures such as Linkerd and Istio emerged, making it easy to network microservices by providing policy-based communication services to the workloads running in the mesh. They offer the ability to declaratively define the desired communication behavior and traffic flow policies.
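For example, in a mesh like Istio, a traffic flow policy is declared rather than coded. The sketch below (service names, subsets, and weights are hypothetical, and a matching DestinationRule defining the subsets is assumed) splits traffic between two versions of a service for a staged rollout:

```yaml
# Hypothetical Istio VirtualService: 90% of requests go to v1 of the
# service and 10% to a canary v2. The mesh's sidecar proxies enforce
# this split with no change to application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-service
spec:
  hosts:
    - example-service
  http:
    - route:
        - destination:
            host: example-service
            subset: v1
          weight: 90
        - destination:
            host: example-service
            subset: v2
          weight: 10
```

Shifting more traffic to v2 is then a one-line change to the weights, reviewed and rolled back like any other code change.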
The reverse proxy has emerged as a core component in the data plane of this service mesh architecture, taking on much greater responsibility than the traditional proxy. In this world, reverse proxies mediate all ingress and egress traffic for the services in the mesh, providing dynamic service discovery, load balancing, TLS termination, HTTP/2 and gRPC proxying, health checks, staged rollouts, and metrics that offer unprecedented observability into the services by instrumenting network traffic.
Additionally, proxy services are being deployed as sidecars, with the application service and the sidecar colocated in the same compute unit. This sidecar pattern gives existing services immediate benefits such as service discovery, observability, circuit breakers, policy-based control over network traffic, automated authentication, and encryption between all services.
For application developers, this means access to a wealth of features without having to include heavy-duty, language-specific libraries and dependencies, enabling polyglot development. Furthermore, they no longer have to worry about the underlying infrastructure state, freeing them to focus on their business logic and enabling them to make high-impact changes with minimal effort using CI/CD pipelines.
As a technologist, I spent a decade building hardware gateways and cloud-based proxies at Redback Networks and Avi Networks, respectively. Recently, F5 acquired Shape Security for $1B. Clearly, proxies are here to stay.
I am personally now focused more on the various cloud-native constructs and sidecars as we continue to move into a DevOps world. Over the coming days, I look forward to sharing more of what I’ve learned, along with some perspective gained from experience.
- Reverse Proxy
- Towards a Next Generation Data Center Architecture
- Ananta: Cloud Scale Load Balancing
- Intel Advanced Encryption Standard New Instructions
- Is TLS Fast Yet?
- Microservices architecture
- Kubernetes: Production-grade Container Orchestration
- Linkerd: Service mesh for Kubernetes and Beyond
- Istio: Connect, Secure, Control and Observe services
- Envoy: Service Proxy for Cloud Native Applications
- Hypertext Transfer Protocol Version 2 (HTTP/2)
- gRPC: A high performance, open-source universal RPC framework
- Sidecar Pattern
- Avi Networks: Software Load Balancer
- AWS: Elastic Load Balancing
Image by Alaina Nicol via the OpenIDEO Cybersecurity Visuals Challenge under a Creative Commons Attribution 4.0 International License