Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”
To achieve multiple pipelines with this chart, current best practice is to maintain one pipeline per chart release. This keeps each pipeline's configuration simple and isolates pipelines from one another.
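Under this practice, each pipeline gets its own Helm release of the chart. As a hedged sketch (the release names, values files, and chart reference below are hypothetical, not part of this chart's documentation):

```shell
# One release per pipeline; each values file configures a single pipeline.
helm install logs-pipeline-app -f pipeline-app-values.yaml elastic/logstash
helm install logs-pipeline-audit -f pipeline-audit-values.yaml elastic/logstash
```

Because each release carries its own configuration and lifecycle, one pipeline can be upgraded or rolled back without touching the others.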
Current best practice for ELK logging is to ship logs from hosts to Logstash using Filebeat, with persistent queues enabled on Logstash. Filebeat supports structured (e.g. JSON) and unstructured (e.g. plain log lines) log shipment.
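A minimal sketch of that setup, assuming default ports and a reachable `logstash` hostname (both assumptions), using Logstash's documented `queue.type` setting and Filebeat's Logstash output:

```yaml
# logstash.yml — enable persistent (disk-backed) queues so buffered
# events survive a Logstash restart
queue.type: persisted

# filebeat.yml — ship host logs to Logstash rather than directly
# to Elasticsearch
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
output.logstash:
  hosts: ["logstash:5044"]
```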
To best utilize the combination of Beats, Logstash, and Elasticsearch, load Beats-generated index templates into Elasticsearch as described in the Beats documentation.
On a Linux instance outside the Kubernetes cluster, you might run the following command to load that instance’s Beats-generated index template into Elasticsearch (the Elasticsearch hostname will vary).
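A sketch using Filebeat's documented `setup` subcommand; because Filebeat is configured with a Logstash output here, the Elasticsearch output is supplied temporarily on the command line (the hostname is an assumption):

```shell
# Load Filebeat's index template directly into Elasticsearch,
# temporarily disabling the configured Logstash output
filebeat setup --index-management \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
```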
As data travels from source to store, Logstash filters parse each event, identify named fields to build the structure, and transform them to converge on a common format for easier, accelerated analysis and business value.
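As an illustration of such a filter stage, a pipeline that parses unstructured web-server log lines into named fields might look like the following (the grok pattern, port, and Elasticsearch host are assumptions, not fixed by this chart):

```
input {
  beats {
    port => 5044
  }
}
filter {
  # Derive structure from the raw line using a standard grok pattern
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Use the parsed timestamp as the event's @timestamp
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```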
Logstash dynamically transforms and prepares your data regardless of format or complexity.
Take the helm of your Logstash deployments with the Pipeline Management UI, which makes orchestrating and managing your pipelines a breeze. The management controls also integrate seamlessly with the built-in security features to prevent any unintended rewiring.