10 Reasons Why it is OK to Hate Database Proxies, but Love Sidecars!
A brief history of the Database Proxy
A proxy is an interception service that sits between the client and the server. When the proxy is deployed close to the client, it is called a forward proxy. When it is deployed close to the server, so that clients never address the origin server directly, it is called a reverse proxy.
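To make the reverse-proxy pattern concrete, here is a minimal single-connection TCP relay sketch (names are hypothetical; a real proxy handles many concurrent clients, partial reads, and errors). The client talks only to the proxy, which forwards bytes to the origin server on its behalf:

```python
import socket

def run_reverse_proxy(listen_port, backend_host, backend_port):
    """Minimal single-shot TCP reverse proxy sketch: the client connects
    to the proxy, which relays bytes to an origin server the client
    never addresses directly."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()                    # one client, for the sketch
    with socket.create_connection((backend_host, backend_port)) as backend:
        backend.sendall(client.recv(4096))      # relay the request onward
        client.sendall(backend.recv(4096))      # relay the response back
    client.close()
    srv.close()
```

From the client's point of view, the proxy's address is the service; the backend behind it can be moved or replaced without the client noticing.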
A database proxy (ProxySQL, MaxScale, and others) is a reverse proxy built to provide security, scalability, and high availability for databases, key-value stores, and message queues.
Before highly distributed data repositories (like MongoDB and Cassandra) became popular, a database proxy improved scalability and performance by pooling connections to the backend data repositories, and provided high availability by routing requests to a healthy backend (the standby when the primary failed), reducing failover time. Such proxies, including HAProxy, Nginx, and similar tools, are generally considered L4 or SQL-agnostic proxies.
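The two core jobs of an L4 proxy described above, connection pooling and health-based failover, can be sketched together (all class and method names here are hypothetical illustrations, not any real proxy's API):

```python
class PooledRouter:
    """Sketch of an SQL-agnostic proxy's routing core: reuse a fixed
    pool of pre-opened backend connections, and hand out connections
    from the first healthy backend (primary first, then standby)."""

    def __init__(self, backends, pool_size=4):
        self.backends = backends                 # ordered: primary first
        # placeholder strings stand in for real pre-opened sockets
        self.pools = {b: [f"{b}-conn{i}" for i in range(pool_size)]
                      for b in backends}
        self.healthy = {b: True for b in backends}

    def mark_down(self, backend):
        self.healthy[backend] = False            # health check failed

    def acquire(self):
        for b in self.backends:                  # failover: skip unhealthy
            if self.healthy[b] and self.pools[b]:
                return b, self.pools[b].pop()
        raise RuntimeError("no healthy backend available")

    def release(self, backend, conn):
        self.pools[backend].append(conn)         # return connection to pool
```

Because connections are pre-opened and reused, clients avoid per-request connection setup, and a primary failure simply shifts `acquire()` to the standby pool.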
With applications moving to the cloud, and data volumes skyrocketing, modern data repositories started providing scalability and high availability using data sharding and replication with a distributed coordinator-worker architecture. To shield the application logic from the underlying topology changes, SQL-aware database proxies such as ProxySQL and MaxScale started gaining traction. These proxies can perform tasks such as SQL read/write splitting, directing read queries to workers and write queries to the primary. SQL-aware proxies are also used in scenarios where there is a need to operate at the SQL layer: to cache SQL query responses for performance, or to rewrite and block certain SQL queries for security.
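The read/write splitting just described boils down to inspecting the first SQL keyword. A minimal sketch (hypothetical class; real SQL-aware proxies parse the full statement and handle transactions, prepared statements, and hints):

```python
import itertools

class ReadWriteSplitter:
    """Sketch of SQL-aware read/write splitting: read-only statements
    are spread round-robin across worker replicas; everything else
    (INSERT/UPDATE/DELETE/DDL) goes to the primary."""

    READS = ("select", "show", "describe", "explain")

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def route(self, sql):
        first = sql.lstrip().split(None, 1)[0].lower()
        return next(self._replicas) if first in self.READS else self.primary
```

A keyword check like this is deliberately naive: statements such as `SELECT … FOR UPDATE` or reads inside a write transaction must still be pinned to the primary, which is why production proxies do real SQL parsing.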
With the maturity of container technology, especially Docker, service-oriented architectures using microservices as their composable units started gaining widespread popularity. Cloud-native applications started to use microservices as their building blocks, lending themselves to the DevOps methodology of Continuous Integration and Continuous Deployment.
While new architectures based on microservices have resulted in many benefits, they have exposed challenges, specifically around security and traffic management. Communication between these disaggregated microservices has resulted in an explosion in east-west traffic, with no concrete perimeter where security rules can be enforced and no single ingress/egress point where traffic management can be performed. As a result, the traditional model of deploying a proxy between the application and the data repository (database or data warehouse) no longer works in this new world.
Cyral solves this problem with our stateless interception service that can be deployed using a sidecar pattern.
Designing a Data Layer Sidecar for the Cloud Native World
With high availability and scalability often baked into the architecture and deployment model of cloud-native applications, Cyral’s data layer sidecar essentially acts as a circuit breaker between applications and data, protecting data repositories in an environment where traffic patterns are less predictable than they were in traditional deployment models. Because Cyral is simple to deploy using service orchestration tools like Kubernetes, teams can ensure their data protection is always on—for all their repositories.
While the Cyral sidecar occupies the same position as a proxy, it is architected for cloud-native environments:
1. Stateless operation: Cyral sidecars operate statelessly to support scale-out and high availability. Traditional application proxies managed session state for query clients, as suited their role in helping older database architectures cope with heavy workloads. Today, data repositories manage data layer connections themselves, so the Cyral sidecar operates statelessly. As a result, your team can protect a single data repository with many sidecars running in a high-availability configuration. Stateless operation also gives your team the option to deploy a sidecar in a fail-open mode.
2. Output Filtering: For queries that read data, the Cyral sidecar passes the query immediately to the data layer without delay. If Cyral determines that the request is malicious or disallowed, it blocks the query’s results. By decoupling Cyral’s analysis of the request from the data repository’s response, we parallelize the work and minimize delays for read operations.
3. SaaS-based control plane to deploy and manage many sidecars: Cloud-based operations usually include many data repositories of different types. Cyral lets the DevOps team manage security across a wide set of repositories with deployment templates to quickly apply Cyral to each new repository. Cyral’s single management interface allows your team to enforce security policies and react to threats across all repositories.
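The output-filtering flow in point 2 above can be sketched as follows (function names are hypothetical stand-ins, not Cyral's implementation): the query is forwarded to the data repository immediately, while policy analysis runs in parallel; if the analysis flags the query, its results are discarded before reaching the client.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_read(query, execute, analyze):
    """Sketch of output filtering: forward the read query without delay
    and run the policy check concurrently; block the results only if
    the analysis disallows the query."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        result_f = pool.submit(execute, query)    # sent to the repository immediately
        verdict_f = pool.submit(analyze, query)   # policy analysis in parallel
        result = result_f.result()
        if not verdict_f.result():                # disallowed: drop the rows
            raise PermissionError("query results blocked by policy")
        return result
```

Because the policy check overlaps with the repository's own execution time, allowed reads see little added latency, which is the parallelization the text describes.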
Cyral sidecars can be deployed in your cloud or on-premises environment as a Kubernetes service, autoscaling group, cloud function, or host-based install. Data flows and sensitive information stay inside the environment where the sidecar is deployed, so there is no risk of spillage.