A sidecar is a secondary container (the term also describes the corresponding deployment pattern) that runs alongside the primary application container within a Kubernetes pod, offering supplemental functionality — such as logging or monitoring — to bolster the application. While this container must communicate with the main application to act as a helper, it maintains its own separate processes. Sidecars exist predominantly in microservices environments.
Development teams often use sidecars to boost security, route requests, add detailed runtime logging, enable monitoring, or even provide load balancing. In the case of logging, the application writes logs to stdout during normal operation, or to stderr for diagnostics, warnings, and errors. A logging sidecar collects these logs and streams them to a preconfigured destination — often a centralized database or observability tool.
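As a minimal sketch of this logging pattern, the application and the sidecar can share a volume: the app writes its log file there, and the sidecar reads it and ships it onward. The image names and paths below are illustrative, not a specific product's configuration:

```yaml
# Hypothetical pod spec: the app writes logs to a shared emptyDir volume,
# and a logging sidecar reads and forwards them (image names are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
    - name: app
      image: example.com/my-app:latest        # assumption: your application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app             # app writes its log files here
    - name: log-shipper                       # the logging sidecar
      image: example.com/log-shipper:latest   # e.g. a Fluent Bit-style collector
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true                      # sidecar only reads the logs
  volumes:
    - name: logs
      emptyDir: {}                            # shared scratch space, lives with the pod
```

Because both containers live in the same pod, the sidecar sees the logs instantly without any network hop; the application itself needs no knowledge of where the logs ultimately end up.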
Sidecars first came onto the Kubernetes scene in 2015, shortly after Kubernetes itself debuted. First known as "composite" containers, they quickly grew into indispensable components for many development teams, serving as a relatively lightweight and performant deployment method for essential services. Sidecars became natively supported Kubernetes infrastructure with the release of v1.28, which introduced first-class sidecar containers (initially behind a feature gate).
How do sidecars work?
Sidecars are deployed alongside applications within Kubernetes pods. In modern Kubernetes, they are defined as init containers that spin up when the pod starts, then persist throughout the life of the pod. Because sidecars run in isolation (communication aside), you can add, remove, or restart a sidecar container without impacting the application itself. This level of separation prevents sidecar faults from causing crashes, slowdowns, or other issues that end users can feel.
Sidecars are considered data plane components: they help collect and transport data back and forth between the application pod and a centralized control plane, which in turn manages both the application instances and the sidecars.
Sidecar containers scale up and down readily with their respective pods (and the services within them), even though their lifecycles are managed independently via the container-level restartPolicy field. Kubernetes doesn't recognize "sidecar" as a distinct resource type; rather, sidecars form a deployment pattern that supports certain use cases. The type of container a sidecar is declared as can affect scheduling and how Kubernetes manages it, however.
It was initially challenging to have a sidecar last for the entirety of the pod lifetime without workarounds, and with no control over startup order, a sidecar could prevent a pod from terminating or interfere with other processes. Newer sidecar implementations in Kubernetes have solved this. Sidecar containers and main containers within the same pod share Linux namespaces (such as the network namespace), which helps them coexist more gracefully while sharing CPU, memory, and storage.
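With the native sidecar support introduced in Kubernetes v1.28, this behavior is expressed by declaring the sidecar as an init container with restartPolicy: Always. A minimal sketch (the image names are illustrative):

```yaml
# A pod using a native sidecar: restartPolicy: Always on an init container
# tells Kubernetes to start it before the app, keep it running for the
# pod's lifetime, and shut it down only after the main container exits.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-native-sidecar
spec:
  initContainers:
    - name: helper-sidecar
      image: example.com/helper:latest    # illustrative sidecar image
      restartPolicy: Always               # marks this init container as a sidecar
  containers:
    - name: app
      image: example.com/my-app:latest    # assumption: your application image
```

This gives operators control over startup order (the sidecar is guaranteed to be running before the app starts) and avoids the old problem of a sidecar keeping a completed pod alive.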
Many use cases require the flexibility to harness a variety of programming languages. Sidecars help here because these helper processes don't need to be written in the same language as the application itself, allowing for a mixture of technologies. The sidecar usually doesn't interact with application code or business logic, so language compatibility isn't critical.
What are the benefits of sidecars?
Sidecars have evolved greatly since their debut roughly a decade ago. Compared with other Kubernetes deployment models, sidecars offer plenty of advantages:
They're flexible and allow development teams to mix and match components written in different programming languages.
They can operate independently from the pod or application container's lifecycles, offering flexibility and persistence through reloads and restarts.
It's easy to delegate resources between sidecars, application containers, and other pod dependencies.
They're uniquely suited for cloud-native deployments due to their scalability and configurability in the face of expected runtime conditions.
They can be terminated suddenly via a SIGTERM signal, without shutting down gracefully, and without negatively impacting the application or overall system.
They can boost security by enabling SSL/TLS termination and mTLS functionality, while serving as a secure endpoint for various types of services.
They can be configured and deployed rapidly, scaling in either direction based on application demand.
They're common within modern microservices architectures and fit well within service mesh setups.
Can HAProxy function as a sidecar?
Yes! HAProxy and HAProxy One can be deployed as sidecar instances inside a container network, within a microservices Kubernetes environment. Additionally, organizations can run HAProxy sidecars in conjunction with Consul containers to support service mesh deployments.
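As a rough sketch, an HAProxy sidecar can sit in front of the application container and proxy traffic to it over the pod's shared localhost network. The config directory matches the official haproxy Docker image, but the application image, port, and ConfigMap name below are assumptions for illustration:

```yaml
# Hypothetical pod running HAProxy as a sidecar proxy for the app container.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-haproxy
spec:
  containers:
    - name: app
      image: example.com/my-app:latest   # assumption: app listens on localhost:8080
    - name: haproxy                      # the proxy sidecar
      image: haproxy:latest              # official HAProxy image
      ports:
        - containerPort: 80              # HAProxy receives traffic here
      volumeMounts:
        - name: haproxy-config
          mountPath: /usr/local/etc/haproxy   # config location in the official image
          readOnly: true
  volumes:
    - name: haproxy-config
      configMap:
        name: haproxy-config             # hypothetical ConfigMap holding haproxy.cfg
```

Because the two containers share the pod's network namespace, the haproxy.cfg backend can simply point at 127.0.0.1:8080, letting HAProxy handle routing, TLS termination, or mTLS on the application's behalf.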
To learn more about sidecars in HAProxy, check out our guide, Building a Service Mesh With HAProxy and Consul.