Kong is an API gateway. That means it is a form of middleware between computing clients and your API-based applications. Kong easily and consistently extends the features of your APIs. Some of the popular features deployed through Kong include authentication, security, traffic control, serverless, analytics & monitoring, request/response transformations, and logging.
As of version 1.0, Kong is also a service mesh. You may be interested in our service mesh comparison article.
You’ve probably heard that Kong is built on Nginx, leveraging its stability and efficiency. But how is this possible exactly?
To be more precise, Kong is a Lua application running in Nginx and made possible by the lua-nginx-module. Instead of compiling Nginx with this module, Kong is distributed along with OpenResty, which already includes lua-nginx-module. OpenResty is not a fork of Nginx, but a bundle of modules extending its capabilities.
This lays the foundation for a pluggable architecture, where Lua scripts (referred to as "plugins") can be enabled and executed at runtime. Because of this, we like to think of Kong as a paragon of microservice architecture: at its core, it implements database abstraction, routing, and plugin management. Plugins can live in separate code bases and be injected anywhere into the request lifecycle, all in a few lines of code.
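As a rough illustration of this pluggable lifecycle, consider the sketch below. It is illustrative Python, not Kong's actual Lua implementation: a core dispatcher exposes lifecycle phases, and plugins register handlers that run at each phase. The plugin and class names are hypothetical.

```python
# Conceptual sketch of a pluggable request lifecycle, loosely modeled on
# Kong's plugin phases. Illustrative Python, not Kong's Lua API.

class Plugin:
    """Base class: plugins override only the phases they care about."""
    def access(self, request):
        """Runs before the request is proxied upstream."""
        pass
    def header_filter(self, response):
        """Runs on the response received from upstream."""
        pass

class KeyAuthPlugin(Plugin):
    """Hypothetical auth plugin: rejects requests without an API key."""
    def access(self, request):
        if "apikey" not in request["headers"]:
            raise PermissionError("401 Unauthorized: missing API key")

class Gateway:
    def __init__(self):
        self.plugins = []
    def use(self, plugin):
        self.plugins.append(plugin)
    def handle(self, request):
        for p in self.plugins:          # each plugin hooks into the lifecycle
            p.access(request)
        response = {"status": 200, "headers": {}}  # stand-in for proxying upstream
        for p in self.plugins:
            p.header_filter(response)
        return response

gw = Gateway()
gw.use(KeyAuthPlugin())
print(gw.handle({"headers": {"apikey": "secret"}})["status"])  # → 200
```

The point of the design is that `Gateway` never needs to know what any plugin does; new behavior is injected into the lifecycle without touching the core.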
Compared to other API gateways, Kong offers many important advantages that are not found elsewhere in the market today.
These Kong advantages apply to both Kong Community Edition and Kong Enterprise Edition. The full set of Kong functionality is described in the publicly available documentation.
A typical Kong setup is made of two main components:
The Kong Server, built on top of NGINX, is the server that will actually process the API requests and execute the configured plugins to provide additional functionalities to the underlying APIs before proxying the request upstream.
Kong listens on several ports that must allow external traffic. By default, these are 8000 for proxying HTTP traffic, 8443 for proxying HTTPS traffic, 8001 for the Admin API over HTTP, and 8444 for the Admin API over HTTPS.
You can use the Admin API to configure Kong, create new users, enable or disable plugins, and a handful of other operations. Since you will be using this RESTful API to operate Kong, it is also extremely easy to integrate Kong with existing systems.
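For example, registering an upstream service and attaching a route to it amounts to two HTTP calls against the Admin API. The sketch below builds those requests with Python's standard library, assuming the Admin API is reachable at its default `localhost:8001`; the service name and upstream URL are placeholders. Passing the requests to `urllib.request.urlopen` against a running Kong node would create the objects for real.

```python
import json
import urllib.request

ADMIN_API = "http://localhost:8001"  # Kong's default Admin API address

def admin_request(path, payload):
    """Build a POST request against the Kong Admin API."""
    return urllib.request.Request(
        ADMIN_API + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Register an upstream service, then attach a route to it.
service_req = admin_request("/services", {
    "name": "example-service",
    "url": "http://upstream.example.com:8080",
})
route_req = admin_request("/services/example-service/routes", {
    "paths": ["/example"],
})

# urllib.request.urlopen(service_req) would send it to a running node.
print(service_req.full_url)  # → http://localhost:8001/services
```

Because every operation is a plain RESTful call like this, Kong is easy to drive from existing deployment tooling or CI pipelines.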
Kong uses an external datastore to store its configuration, such as registered APIs, Consumers, and Plugins. Plugins themselves can store any information they need persisted, for example rate-limiting counters or Consumer credentials.
Kong maintains a cache of this data so that no database roundtrip is needed while proxying requests, which would critically impact performance. This cache is invalidated via inter-node communication whenever calls to the Admin API are made. For that reason, manipulating Kong's datastore directly is discouraged, since your nodes' caches won't be properly invalidated.
This architecture allows Kong to scale horizontally by simply adding new nodes that will connect to the same datastore and maintain their own cache.
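The caching and invalidation behavior described above can be sketched as follows. This is an illustrative Python model, not Kong's implementation: each node keeps a local cache over a shared datastore, and an Admin API write broadcasts an invalidation to every node.

```python
# Illustrative model of node-local caching with cluster-wide
# invalidation, as described above. Not Kong's actual code.

class Datastore:
    """Shared configuration store (stands in for the external database)."""
    def __init__(self):
        self.config = {}

class Node:
    """A gateway node with its own cache over the shared datastore."""
    def __init__(self, store):
        self.store = store
        self.cache = {}
    def get(self, key):
        if key not in self.cache:                # cache miss: one DB roundtrip
            self.cache[key] = self.store.config.get(key)
        return self.cache[key]                   # cache hit: no roundtrip
    def invalidate(self, key):
        self.cache.pop(key, None)

class Cluster:
    def __init__(self, store, size):
        self.store = store
        self.nodes = [Node(store) for _ in range(size)]
    def admin_write(self, key, value):
        """Admin API write: update the datastore, then invalidate every node."""
        self.store.config[key] = value
        for node in self.nodes:
            node.invalidate(key)

store = Datastore()
cluster = Cluster(store, size=3)
cluster.admin_write("rate-limit", 100)
print(cluster.nodes[0].get("rate-limit"))  # → 100
cluster.admin_write("rate-limit", 50)      # every node's cache is invalidated
print(cluster.nodes[0].get("rate-limit"))  # → 50
```

Note that a direct write to `store.config` would bypass `admin_write` and leave stale entries in every node's cache, which is exactly why manipulating the datastore directly is discouraged. Adding a node is just `Node(store)`: it connects to the same datastore and warms its own cache, which is what makes horizontal scaling straightforward.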