A ZooKeeper “personality” for etcd. Point a ZooKeeper client at zetcd, and it will dispatch the operations to an etcd cluster.

Protocol encoding and decoding are heavily based on go-zookeeper. This chart runs zetcd, a ZooKeeper “personality” for etcd.

Prerequisites

  • Kubernetes 1.4+ with Beta APIs enabled
  • Suggested: PV provisioner support in the underlying infrastructure to support backups of etcd

Running zetcd on Docker

Official Docker images of tagged zetcd releases are hosted at quay.io/etcd-io/zetcd for containerized environments. Use docker run to launch the zetcd container with the same configuration as the go get example:

docker run --net host -t quay.io/etcd-io/zetcd -endpoints localhost:2379


In cross-checking mode, zetcd dynamically tests a fresh, isolated “candidate” zetcd cluster against a fresh, isolated ZooKeeper “oracle” cluster for divergences. This mode dispatches requests to both zetcd and ZooKeeper, then compares the responses to check for equivalence. If the responses disagree, the divergence is flagged in the logs. Use the flag -debug-zkbridge to configure a ZooKeeper endpoint and -debug-oracle zk to enable checking.
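The dispatch-and-compare behavior described above can be sketched in Go. The `Response` type and the handler signatures here are illustrative stand-ins, not zetcd's actual internal API:

```go
package main

import (
	"fmt"
	"reflect"
)

// Response is a simplified stand-in for a decoded ZooKeeper response.
type Response struct {
	Zxid int64
	Data string
}

// crossCheck dispatches the same request to the candidate (zetcd) and the
// oracle (ZooKeeper), compares the responses for equivalence, and flags
// any divergence, mirroring the cross-checking mode described above.
func crossCheck(req string, candidate, oracle func(string) Response) (Response, bool) {
	c := candidate(req)
	o := oracle(req)
	ok := reflect.DeepEqual(c, o)
	if !ok {
		fmt.Printf("oracle mismatch on %q: zetcd=%+v zk=%+v\n", req, c, o)
	}
	// The oracle's answer is treated as authoritative.
	return o, ok
}

func main() {
	zetcd := func(req string) Response { return Response{Zxid: 1, Data: "a"} }
	zk := func(req string) Response { return Response{Zxid: 1, Data: "a"} }
	_, ok := crossCheck("get /foo", zetcd, zk)
	fmt.Println(ok)
}
```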

Cross-check zetcd’s ZooKeeper emulation with a native ZooKeeper server endpoint at localhost:2182 like so:

zetcd -zkaddr 0.0.0.0:2181 -endpoints localhost:2379 -debug-zkbridge localhost:2182 -debug-oracle zk -logtostderr -v 9

zetcd: running ZooKeeper apps without ZooKeeper

Distributed systems commonly rely on distributed consensus to coordinate work. Usually, the systems providing distributed consensus guarantee that information is delivered in order and never suffer split-brain conflicts. The usefulness of such systems, and the richness of their design space, is evident from the proliferation of implementations; projects such as Chubby, ZooKeeper, etcd, and Consul, despite differing in philosophy and protocol, all focus on serving similar basic key-value primitives for distributed consensus. As part of making etcd the most appealing foundation for distributed systems, the etcd team developed a new proxy, zetcd, to serve ZooKeeper requests with an unmodified etcd cluster.

ZooKeeper was the first popular open source software in this vein, making it the preferred backend for many distributed systems. These systems would conceptually work with etcd as well, but in practice they don’t, for historical reasons. An etcd cluster can’t be a drop-in replacement for ZooKeeper; etcd’s data model and client protocol are incompatible with ZooKeeper applications. Nor can ZooKeeper applications be expected to natively support etcd; if a system already works, there’s little motivation to complicate it further with new backends. Fortunately, the etcd v3 API is expressive enough to emulate ZooKeeper’s data model client-side with an ordinary proxy: zetcd, a new open source project developed by the etcd team. Today marks zetcd’s first beta release, v0.0.1, setting the stage for managing and deploying zetcd in production systems.
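To give a flavor of that emulation, one way a flat etcd v3 keyspace can model ZooKeeper’s hierarchical znodes is to prefix each path with its directory depth, so all children of a node share a common prefix and can be listed with a single range query. The `/zk/key/` namespace below is an illustrative assumption, not necessarily zetcd’s exact encoding:

```go
package main

import (
	"fmt"
	"strings"
)

// etcdKey maps a znode path to a flat etcd v3 key by prefixing it with
// its depth, so directory listings become range queries over one prefix.
// The "/zk/key/" namespace is illustrative only.
func etcdKey(znodePath string) string {
	depth := strings.Count(znodePath, "/")
	return fmt.Sprintf("/zk/key/%d%s", depth, znodePath)
}

// childrenPrefix returns the etcd range prefix covering all direct
// children of a znode.
func childrenPrefix(znodePath string) string {
	if znodePath == "/" {
		return "/zk/key/1/"
	}
	depth := strings.Count(znodePath, "/") + 1
	return fmt.Sprintf("/zk/key/%d%s/", depth, znodePath)
}

func main() {
	fmt.Println(etcdKey("/app/config")) // /zk/key/2/app/config
	fmt.Println(childrenPrefix("/app")) // /zk/key/2/app/
}
```

With this layout, fetching the children of `/app` is a single etcd range read over the prefix `/zk/key/2/app/`, rather than a scan of the whole keyspace.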

The zetcd proxy sits in front of an etcd cluster and serves an emulated ZooKeeper client port, letting unmodified ZooKeeper applications run on top of etcd. At a high level, zetcd ingests ZooKeeper client requests, fits them to etcd’s data model and API, issues the requests to etcd, then returns translated responses back to the client. The proxy’s performance is competitive with ZooKeeper proper, and it simplifies ZooKeeper cluster management with etcd features and tooling. This post shows how to use zetcd, explains how it works, and shares some performance benchmarks.
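The ingest-translate-dispatch flow above can be sketched as follows. Every type and helper here is a hypothetical stand-in for zetcd’s real internals; the one grounded detail is that zetcd derives ZooKeeper-style ordering from etcd’s revision counter:

```go
package main

import "fmt"

// zkRequest and zkResponse are simplified stand-ins for decoded
// ZooKeeper wire-protocol messages.
type zkRequest struct {
	Op   string // e.g. "create", "getData"
	Path string
	Data []byte
}

type zkResponse struct {
	Zxid int64
	Data []byte
}

// etcdClient is a stand-in for the etcd v3 KV client.
type etcdClient interface {
	Put(key string, val []byte) (rev int64, err error)
	Get(key string) (val []byte, rev int64, err error)
}

// serveZK translates one ZooKeeper request into etcd operations and maps
// the etcd revision back onto a ZooKeeper-style zxid in the response.
func serveZK(kv etcdClient, req zkRequest) (zkResponse, error) {
	switch req.Op {
	case "create":
		rev, err := kv.Put(req.Path, req.Data)
		return zkResponse{Zxid: rev}, err
	case "getData":
		val, rev, err := kv.Get(req.Path)
		return zkResponse{Zxid: rev, Data: val}, err
	default:
		return zkResponse{}, fmt.Errorf("unhandled op %q", req.Op)
	}
}

// fakeKV is an in-memory etcdClient for demonstration.
type fakeKV struct {
	m   map[string][]byte
	rev int64
}

func (f *fakeKV) Put(k string, v []byte) (int64, error) {
	f.rev++
	f.m[k] = v
	return f.rev, nil
}

func (f *fakeKV) Get(k string) ([]byte, int64, error) {
	return f.m[k], f.rev, nil
}

func main() {
	kv := &fakeKV{m: map[string][]byte{}}
	serveZK(kv, zkRequest{Op: "create", Path: "/foo", Data: []byte("bar")})
	resp, _ := serveZK(kv, zkRequest{Op: "getData", Path: "/foo"})
	fmt.Printf("%s zxid=%d\n", resp.Data, resp.Zxid)
}
```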

Getting started with zetcd

All zetcd needs to get running is a Go compiler, an internet connection to fetch the source code, and a system that can run etcd. The following example builds zetcd from source and runs a few ZooKeeper commands against it. Building etcd and zetcd from development branches is not recommended for serious deployments, but it’s the simplest way to give zetcd a try.
