Kong is an API gateway. That means it is a form of middleware between computing clients and your API-based applications. Kong easily and consistently extends the features of your APIs. Some of the popular features deployed through Kong include authentication, security, traffic control, serverless, analytics & monitoring, request/response transformations, and logging.

As of version 1.0, Kong also functions as a service mesh. You may be interested in our service mesh comparison article.

What is Kong, technically?

You’ve probably heard that Kong is built on Nginx, leveraging its stability and efficiency. But how is this possible exactly?

To be more precise, Kong is a Lua application running in Nginx and made possible by the lua-nginx-module. Instead of compiling Nginx with this module, Kong is distributed along with OpenResty, which already includes lua-nginx-module. OpenResty is not a fork of Nginx, but a bundle of modules extending its capabilities.
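As a rough illustration of the underlying mechanism (this is not Kong's actual code), lua-nginx-module lets you embed Lua directly into an Nginx request phase. The fragment below is hypothetical: it rejects requests missing an `apikey` header before proxying, the way a Kong authentication plugin would.

```nginx
# Hypothetical nginx.conf fragment using lua-nginx-module (bundled in OpenResty).
location / {
    access_by_lua_block {
        -- Lua executed in the access phase, before the request is proxied
        if ngx.var.http_apikey == nil then
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
    }
    proxy_pass http://upstream_backend;
}
```

Kong generalizes exactly this pattern: instead of hard-coding Lua in the configuration, it loads plugin code dynamically at each phase.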

This sets the foundations for a pluggable architecture, where Lua scripts (referred to as "plugins") can be enabled and executed at runtime. Because of this, we like to think of Kong as a paragon of microservice architecture: at its core, it implements database abstraction, routing and plugin management. Plugins can live in separate code bases and be injected anywhere into the request lifecycle, all in a few lines of code.
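The plugin lifecycle can be sketched as follows. This is a minimal Python model, not Kong's Lua implementation: the class and hook names (`access`, `header_filter`) mirror Kong's plugin phases, but everything else here is a hypothetical stand-in.

```python
# Toy model of a pluggable request pipeline in the spirit of Kong's plugin
# architecture. Real Kong plugins are Lua modules hooked into Nginx phases.

class Plugin:
    """Base class: subclasses override only the lifecycle hooks they need."""
    def access(self, request): pass          # runs before proxying upstream
    def header_filter(self, response): pass  # runs after the upstream responds

class KeyAuthPlugin(Plugin):
    def __init__(self, allowed_keys):
        self.allowed_keys = allowed_keys
    def access(self, request):
        # reject the request before it ever reaches the upstream service
        if request.get("apikey") not in self.allowed_keys:
            raise PermissionError("invalid API key")

class CorrelationIdPlugin(Plugin):
    def header_filter(self, response):
        response["headers"]["X-Request-Id"] = "req-1"

def handle(request, plugins, upstream):
    for p in plugins:
        p.access(request)              # every plugin's access phase
    response = upstream(request)       # proxy to the upstream service
    for p in plugins:
        p.header_filter(response)      # every plugin's response phase
    return response

plugins = [KeyAuthPlugin({"secret"}), CorrelationIdPlugin()]
upstream = lambda req: {"status": 200, "headers": {}}
resp = handle({"apikey": "secret"}, plugins, upstream)
```

Note that neither plugin knows about the other: each lives in its own class and is injected into the lifecycle by the core loop, which is the property the paragraph above describes.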


To run Kong on Kubernetes, the prerequisites are:

  • Kubernetes 1.8+ with Beta APIs enabled.
  • PV provisioner support in the underlying infrastructure, if persistence is needed for the Kong datastore.


Kong's plugins provide features such as:

  • Authentication: Protect your services with an authentication layer.
  • Traffic Control: Manage, throttle, and restrict inbound and outbound API traffic.
  • Analytics: Visualize, inspect, and monitor APIs and microservice traffic.
  • Transformations: Transform requests and responses on the fly.
  • Logging: Stream request and response data to logging solutions.
  • Serverless: Invoke serverless functions via APIs.
Getting started with Kong takes three steps:

  • Add your API on Kong: After installing and starting Kong, use the Admin API on port 8001 to register a new API. Kong will route every incoming request with the specified public DNS to the associated target URL.
  • Add plugins on the API: Then add extra functionality by using Kong Plugins. You can also create your own plugins.
  • Make a request: …and then you can consume the API on port 8000 by requesting the public DNS you specified. In production, point your public DNS at Kong; routing by URL path is also supported.
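The steps above can be sketched against the Admin API. The commands below assume a local Kong 1.0+ with default ports; the service name, upstream URL, and hostname are placeholders for illustration.

```shell
# 1. Register an upstream service and a route for it
curl -i -X POST http://localhost:8001/services \
  --data name=orders \
  --data url=http://orders.internal:8080

curl -i -X POST http://localhost:8001/services/orders/routes \
  --data 'hosts[]=api.example.com'

# 2. Enable a plugin on the service, e.g. rate limiting
curl -i -X POST http://localhost:8001/services/orders/plugins \
  --data name=rate-limiting \
  --data config.minute=100

# 3. Consume the API through the proxy port
curl -i http://localhost:8000/ --header 'Host: api.example.com'
```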

Why use Kong?

Compared to other API gateways on the market today, Kong offers several important advantages. Choose Kong to ensure your API gateway platform is:

  • Radically Extensible
  • Blazingly Fast
  • Open Source
  • Platform Agnostic
  • Cloud Native
  • RESTful

These Kong advantages apply to both Kong Community Edition and Kong Enterprise Edition. The full set of Kong functionality is described in the publicly available documentation.

How does Kong work?

A typical Kong setup is made of two main components:

  • Kong’s server, based on the widely adopted NGINX HTTP server: a reverse proxy that processes your clients’ requests to your upstream services.
  • Kong’s datastore, in which the configuration is stored to allow you to horizontally scale Kong nodes. Apache Cassandra and PostgreSQL can be used to fulfill this role.

Kong needs both of these components set up and operational.
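For instance, a node is pointed at a shared PostgreSQL datastore through its kong.conf; the host and credentials below are placeholders.

```
# kong.conf fragment: every node sharing this datastore
# serves the same configuration
# (use "cassandra" instead of "postgres" for a Cassandra cluster)
database = postgres
pg_host = 10.0.0.5
pg_port = 5432
pg_user = kong
pg_password = kong
pg_database = kong
```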

Kong server

The Kong Server, built on top of NGINX, is the server that will actually process the API requests and execute the configured plugins to provide additional functionalities to the underlying APIs before proxying the request upstream.

Kong listens on several ports. By default, the following ports must allow external traffic:

  • 8000 for proxying HTTP traffic. See proxy_listen.
  • 8443 for proxying HTTPS traffic. See proxy_listen_ssl.

Additionally, the following ports are used internally and should be firewalled in production usage:

  • 8001 provides Kong’s Admin API that you can use to operate Kong. See admin_api_listen.
  • 8444 provides Kong’s Admin API over HTTPS. See admin_api_ssl_listen.

You can use the Admin API to configure Kong, create new users, enable or disable plugins, and a handful of other operations. Since you will be using this RESTful API to operate Kong, it is also extremely easy to integrate Kong with existing systems.
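As an example of such an operation, the calls below create a consumer and provision it an API key via the key-auth plugin. The username and key are placeholders, and the key-auth plugin is assumed to be enabled.

```shell
# Create a consumer, then issue it a credential for the key-auth plugin
curl -i -X POST http://localhost:8001/consumers \
  --data username=alice

curl -i -X POST http://localhost:8001/consumers/alice/key-auth \
  --data key=opensesame
```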

Kong datastore

Kong uses an external datastore to store its configuration, such as registered APIs, Consumers, and Plugins. Plugins themselves can persist any information they need, for example rate-limiting counters or Consumer credentials.

Kong maintains a cache of this data so that proxying a request does not require a database roundtrip, which would critically impact performance. This cache is invalidated through inter-node communication when calls are made to the Admin API. For that reason, manipulating Kong’s datastore directly is discouraged: your nodes’ caches won’t be properly invalidated.

This architecture allows Kong to scale horizontally by simply adding new nodes that will connect to the same datastore and maintain their own cache.
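The cache-and-invalidate behavior can be modeled in a few lines. This is a deliberately simplified Python sketch with hypothetical names; real Kong propagates invalidation events over a cluster communication channel rather than calling nodes directly.

```python
# Toy model of Kong nodes sharing one datastore, each with a local cache
# that Admin API writes invalidate.

class Datastore:
    def __init__(self):
        self.apis = {}       # name -> upstream URL
        self.nodes = []
    def register(self, node):
        self.nodes.append(node)
    def admin_write(self, name, target_url):
        """An Admin API call: update config and invalidate every node's cache."""
        self.apis[name] = target_url
        for node in self.nodes:          # stand-in for inter-node communication
            node.cache.pop(name, None)

class KongNode:
    def __init__(self, store):
        self.store = store
        self.cache = {}
        store.register(self)
    def resolve(self, name):
        if name not in self.cache:       # cache miss: one datastore roundtrip
            self.cache[name] = self.store.apis[name]
        return self.cache[name]          # cache hit: no roundtrip while proxying

store = Datastore()
node_a, node_b = KongNode(store), KongNode(store)
store.admin_write("orders", "http://orders.internal:8080")
node_a.resolve("orders")                 # both nodes warm their own caches
node_b.resolve("orders")
store.admin_write("orders", "http://orders-v2.internal:8080")  # invalidates both
```

Writing to `store.apis` directly, bypassing `admin_write`, would leave both nodes serving the stale cached URL, which is exactly why manipulating the datastore behind Kong's back is discouraged. Adding a node is just constructing another `KongNode` against the same datastore, which is the horizontal-scaling property described above.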
