Linkerd is an open source network proxy designed to be deployed as a service mesh: a dedicated layer for managing, controlling, and monitoring service-to-service communication within an application.



Beyond adding reliability through circuit breaking and latency-aware load balancing, Linkerd automatically instruments top-line service metrics such as request volume, success rates, and latency distributions. Linkerd also provides request-level routing and integration with multiple service discovery backends through a powerful routing language called dtabs.

In this section, you’ll find a rundown of Linkerd’s major features.

  • Load balancing: Linkerd provides multiple load-balancing algorithms that use real-time performance metrics to distribute load and reduce tail latencies across your application.
  • Circuit breaking: Linkerd includes automatic circuit breaking that will stop sending traffic to instances that are deemed to be unhealthy, giving them a chance to recover and avoiding cascading failures.
  • Service discovery: Linkerd integrates with various service discovery backends, helping you to reduce the complexity of your code by removing ad-hoc service discovery implementations.
  • Dynamic request routing: Linkerd enables dynamic request routing and rerouting, allowing you to set up staging services, canaries, blue-green deploys, cross-DC failover, and dark traffic with a minimal amount of configuration.
  • Retries and deadlines: Linkerd can automatically retry requests on certain failures and can time out requests after a specified period.
  • TLS: Linkerd can be configured to send and receive requests with TLS, which you can use to encrypt communication across host boundaries without modification to your existing application code.
  • HTTP proxy integration: Linkerd can act as an HTTP proxy, a mode supported by almost all modern HTTP clients, making it easy to integrate into existing applications.
  • Transparent proxying: Linkerd can be run as a transparent proxy, with the linkerd-inject utility configuring your host’s iptables rules to redirect traffic through it.
  • gRPC: Linkerd supports both HTTP/2 and TLS, allowing it to route gRPC requests and enabling advanced RPC mechanisms such as bidirectional streaming, flow control, and structured data payloads.
  • Distributed tracing and instrumentation: Linkerd supports both distributed tracing and metrics instrumentation, providing uniform observability across all services.
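
Several of these features come together in a single configuration file. The sketch below is a hypothetical Linkerd 1.x style configuration, assuming the file-based service discovery namer (io.l5d.fs) and illustrative names, ports, and paths:

```yaml
# Hypothetical linkerd.yaml sketch (illustrative, not prescriptive).
# Assumes a "disco" directory of flat files, one per service, each
# listing host:port pairs -- the io.l5d.fs service discovery namer.
namers:
- kind: io.l5d.fs
  rootDir: disco

routers:
- protocol: http
  # Route logical names like /svc/users through the fs namer.
  dtab: |
    /svc => /#/io.l5d.fs;
  servers:
  - ip: 0.0.0.0
    port: 4140          # applications send proxied requests here
  client:
    # Retry budget: allow extra retry load worth up to 20% of
    # live requests (field names assumed from Linkerd 1.x docs).
    retries:
      budget:
        percentCanRetry: 0.2
```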

What problems does it solve?

Linkerd was built to solve the problems we found operating large production systems at companies like Twitter, Yahoo, Google and Microsoft. In our experience, the source of the most complex, surprising, and emergent behavior was usually not the services themselves, but the communication between services. Linkerd addresses these problems not just by controlling the mechanics of this communication but by providing a layer of abstraction on top of it.

By providing a consistent, uniform layer of instrumentation and control across services, Linkerd frees service owners to choose whichever language is most appropriate for their service. And by decoupling communication mechanics from application code, Linkerd gives you visibility and control over these mechanics without changes to the application itself.

Today, companies around the world use Linkerd in production to power the heart of their software infrastructure. Linkerd takes care of the difficult, error-prone parts of cross-service communication—including latency-aware load balancing, connection pooling, TLS, instrumentation, and request-level routing—to make application code scalable, performant, and resilient.


Linkerd runs as a standalone proxy. As a result, it does not depend on specific languages or libraries. Applications typically use Linkerd by running instances in known locations and proxying calls through these instances—i.e., rather than connecting to destinations directly, services connect to their corresponding Linkerd instances, and treat these instances as if they were the destination services.
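
Because Linkerd speaks plain HTTP proxying, a client needs nothing more than standard proxy configuration to route through it. A minimal Python sketch, assuming a Linkerd instance at localhost:4140 and a hypothetical logical service name "users":

```python
import urllib.request

# Assumption: a Linkerd instance is listening locally on port 4140.
LINKERD_PROXY = "http://localhost:4140"

# Point a standard HTTP client at Linkerd as an ordinary HTTP proxy.
proxy_handler = urllib.request.ProxyHandler({"http": LINKERD_PROXY})
opener = urllib.request.build_opener(proxy_handler)

# Requests to the logical name "users" would now go through Linkerd,
# which resolves the name and load-balances across instances:
#   opener.open("http://users/api/profile")
```

The application addresses the logical service name; Linkerd, not the application, decides which concrete instance receives the request.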

Under the hood, Linkerd applies routing rules, communicates with existing service discovery mechanisms, and load-balances over destination instances—all while instrumenting the communication and reporting metrics. By deferring the mechanics of making the call to Linkerd, application code is decoupled from:

  • knowledge of the production topology;
  • knowledge of the service discovery mechanism; and
  • load balancing and connection management logic.
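
The routing rules mentioned above are expressed as dtab entries, where later rules take precedence over earlier ones. A hypothetical example, assuming the file-based io.l5d.fs namer and illustrative service names "users" and "users-v2":

```
/svc => /#/io.l5d.fs;          # default: resolve names via service discovery
/svc/users => /svc/users-v2;   # reroute "users" traffic to a canary version
```

Because such rules live in Linkerd’s configuration (or can be applied per-request via the l5d-dtab header) rather than in application code, traffic can be shifted without redeploying services.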

Applications also benefit from a consistent, global traffic control mechanism. This is particularly important for polyglot applications, for which it is very difficult to attain this sort of consistency via libraries.

Linkerd instances can be deployed as sidecars (i.e. one instance per application service instance) or per-host. Since Linkerd instances are stateless and independent, they can fit easily into existing deployment topologies. They can be deployed alongside application code in a variety of configurations and with a minimum of coordination.
