stolon is a cloud-native PostgreSQL manager for PostgreSQL high availability. It’s cloud-native because it lets you keep a highly available PostgreSQL inside your containers (kubernetes integration) but also on every other kind of infrastructure (cloud IaaS, old-style infrastructures, etc.).


Features

  • Leverages PostgreSQL streaming replication.
  • Resilient to any kind of partitioning. While trying to keep the maximum availability, it prefers consistency over availability.
  • kubernetes integration letting you achieve PostgreSQL high availability.
  • Uses a cluster store like etcd, consul or the kubernetes API server as a highly available data store and for leader election.
  • Asynchronous (default) and synchronous replication.
  • Full cluster setup in minutes.
  • Easy cluster administration.
  • Can do point-in-time recovery integrating with your preferred backup/restore tool.
  • Standby cluster (for multi-site replication and near-zero downtime migration).
  • Automatic service discovery and dynamic reconfiguration (handles Postgres and stolon processes changing their addresses).
  • Can use pg_rewind for fast instance resynchronization with the current master.


Stolon is composed of three main components:

  • keeper: it manages a PostgreSQL instance converging to the cluster view computed by the leader sentinel.
  • sentinel: it discovers and monitors keepers and proxies and computes the optimal cluster view.
  • proxy: the client’s access point. It enforces connections to the right PostgreSQL master and forcibly closes connections to old masters.
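As a rough sketch of how the three components are launched on a single host, assuming an etcd v3 store (the cluster name, store endpoint, UIDs, paths and passwords below are placeholder values; see each binary’s `--help` for the authoritative flag list):

```shell
# Sentinel: discovers/monitors keepers and proxies and computes the cluster view.
stolon-sentinel --cluster-name stolon-cluster \
  --store-backend etcdv3 --store-endpoints http://127.0.0.1:2379 &

# Keeper: manages a local PostgreSQL instance, converging to the leader
# sentinel's computed cluster view.
stolon-keeper --cluster-name stolon-cluster \
  --store-backend etcdv3 --store-endpoints http://127.0.0.1:2379 \
  --uid keeper01 --data-dir /var/lib/stolon/keeper01 \
  --pg-repl-username repluser --pg-repl-password replpassword \
  --pg-su-password supassword &

# Proxy: the entry point clients connect to.
stolon-proxy --cluster-name stolon-cluster \
  --store-backend etcdv3 --store-endpoints http://127.0.0.1:2379 \
  --port 25432 &
```

A real deployment runs each component as a supervised service (or a kubernetes pod) rather than backgrounded shell jobs.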

Project Status

Stolon is under active development and used in different environments. Its on-disk format (store hierarchy and key contents) will probably change in the future to support new features. If a breaking change is needed, it’ll be documented in the release notes and an upgrade path will be provided.

In any case, it’s quite easy to reset a cluster from scratch while keeping the current master instance working and without losing any data.
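Such a reset can be sketched with stolonctl’s "existing" init mode, which re-initializes the cluster metadata around a keeper that already holds the data (cluster name, store endpoint and keeper UID below are placeholders):

```shell
# Re-initialize the cluster, telling stolon to adopt the PostgreSQL
# instance managed by an existing keeper instead of creating a new one,
# so the current master keeps working and no data is lost.
stolonctl --cluster-name stolon-cluster \
  --store-backend etcdv3 --store-endpoints http://127.0.0.1:2379 \
  init '{ "initMode": "existing", "existingConfig": { "keeperUID": "keeper01" } }'
```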


Requirements

  • PostgreSQL 11, 10 or 9 (9.4, 9.5, 9.6)
  • etcd2 >= v2.0, etcd3 >= v3.0, consul >= v0.6 or kubernetes >= 1.8 (depending on the store you’re going to use)
  • OS: currently stolon is tested on GNU/Linux (with reports of people also using it on Solaris, *BSD and Darwin)

Why should clients use the stolon proxy?

Since stolon by default prefers consistency over availability, clients need to be connected to the currently elected master and disconnected from unelected ones. For example, if you are connected to the current elected master and the cluster subsequently (for any valid reason, like network partitioning) elects a new master, then to achieve consistency the client needs to be disconnected from the old master (or it’ll write data to it that will be lost when it resyncs). This is the purpose of the stolon proxy.
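In practice, clients simply point their connection string at the proxy instead of any particular PostgreSQL instance (the hostname, user and database below are illustrative; 25432 is the port used in the stolon examples):

```shell
# Connect through the stolon proxy: it always routes to the current
# elected master and forcibly drops connections to demoted masters.
psql -h stolon-proxy.example.com -p 25432 -U myuser mydb
```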

Why didn’t you use an already existing proxy like haproxy?

Given our need to forcibly close connections to unelected masters and to handle keepers/sentinels that can come and go and change their addresses, we implemented a dedicated proxy that directly reads its state from the store. Thanks to Go’s goroutines it’s very fast.

We are open to alternative solutions (PRs are welcome) like using haproxy if they can meet the above requirements. For example, a hypothetical haproxy-based proxy needs a way to work with changing IP addresses, get the current cluster information and be able to forcibly close a connection when an haproxy backend is marked as failed (as a note, to achieve the latter, a possible solution that needs testing would be to use the on-marked-down shutdown-sessions haproxy server option).

Does the stolon proxy send read-only requests to standbys?

Currently, the proxy redirects all requests to the master. There is a feature request for using the proxy also for standbys, but it’s low on the priority list.

If your application wants to query the hot standbys, you can currently read the standby dbs and their status from the cluster data directly from the store (but be warned that this isn’t meant to be stable).
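A sketch of doing this via stolonctl rather than hitting the store directly (the subcommand is available in recent stolonctl versions; cluster name and endpoints are placeholders, and the clusterdata field names are, as noted above, not a stable interface):

```shell
# Dump the cluster data and list the databases stolon is managing,
# together with their role and listen address; exact JSON field names
# may change between stolon releases.
stolonctl --cluster-name stolon-cluster \
  --store-backend etcdv3 --store-endpoints http://127.0.0.1:2379 \
  clusterdata read \
  | jq '.dbs[] | { uid: .uid, role: .spec.role, listenAddress: .status.listenAddress }'
```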

Why are shared storage and fencing not necessary with stolon?

stolon eliminates the requirement of shared storage since it uses Postgres streaming replication, and it can avoid the need for fencing (killing the node, removing access to the shared storage etc…) due to its architecture:

  • It uses etcd/consul/kubernetes as the first step to determine which components are healthy.
  • The stolon-proxy is a sort of fencer since it’ll close connections to old masters and direct new connections to the current master.

How does stolon differ from pure kubernetes high availability?

A pure kubernetes approach to achieving Postgres high availability is using persistent volumes and statefulsets (petsets). Using persistent volumes means you won’t lose any transaction, but k8s currently requires fencing to avoid data corruption when using statefulsets with persistent volumes.

stolon instead uses postgres streaming replication to achieve high availability. To avoid losing any transaction you can enable synchronous replication.
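Synchronous replication is a cluster spec option, so it can be enabled at runtime; a sketch (cluster name, endpoints and standby counts are placeholder values):

```shell
# Patch the cluster specification to require synchronous replication,
# with at least one and at most two synchronous standbys.
stolonctl --cluster-name stolon-cluster \
  --store-backend etcdv3 --store-endpoints http://127.0.0.1:2379 \
  update --patch '{ "synchronousReplication": true, "minSynchronousStandbys": 1, "maxSynchronousStandbys": 2 }'
```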

With stolon you also have other pros:

  • Currently k8s failed-node detection/pod eviction takes minutes, while stolon by default detects failures in seconds.
  • On cloud providers like AWS, an EBS volume is tied to an availability zone, so you cannot fail over to another availability zone. With stolon you can.
  • You can tie instance data to a specific node if you don’t want to use persistent volumes like AWS EBS.
  • Easily deploy and manage new clusters.
  • Update the cluster spec.
  • Update postgres parameters.
  • Do point-in-time recovery.
  • Create master/standby stolon clusters (future improvement).
  • Easily scale stolon components.
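Updating postgres parameters, for instance, goes through the same cluster spec mechanism as the rest of the administration; a sketch (parameter names and values below are just examples):

```shell
# Patch pgParameters in the cluster spec; stolon propagates the change
# to the managed PostgreSQL instances (restarting them when a parameter
# requires it).
stolonctl --cluster-name stolon-cluster \
  --store-backend etcdv3 --store-endpoints http://127.0.0.1:2379 \
  update --patch '{ "pgParameters": { "max_connections": "300", "log_min_duration_statement": "500ms" } }'
```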

