Installs the Prometheus Adapter for the Custom Metrics API. Custom metrics are used in Kubernetes by the Horizontal Pod Autoscaler to scale workloads based on your own metrics, pulled from an external metrics provider such as Prometheus. This chart complements the metrics-server chart, which provides resource metrics only.
This repository contains an implementation of the Kubernetes custom metrics API (custom.metrics.k8s.io/v1beta1), suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+.
To use the chart, ensure that prometheus.url and prometheus.port are configured with the correct Prometheus service endpoint. The chart ships with a set of default rules out of the box, but they may pull in too many metrics or not map them correctly for your needs; it is therefore recommended to populate rules.custom with a list of rules (see the config document for the proper format). Finally, to configure your Horizontal Pod Autoscaler to use a custom metric, see the custom metrics section of the HPA walkthrough.
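As a minimal sketch, a values override for the chart might look like the following (the Prometheus service name shown is hypothetical; substitute your own endpoint):

```yaml
# values.yaml -- minimal sketch; substitute your own Prometheus endpoint
prometheus:
  url: http://prometheus.prom.svc   # hypothetical in-cluster service name
  port: 9090

rules:
  custom: []   # populate with rules in the format described in the config document
```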
The adapter takes the standard Kubernetes generic API server arguments (including those for authentication and authorization). By default, it will attempt to use Kubernetes in-cluster config to connect to the cluster.
It takes the following additional arguments specific to configuring how the adapter talks to Prometheus and the main Kubernetes cluster:
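A sketch of what the adapter's container arguments might look like in its Deployment — flag names are taken from the adapter's documentation, and the values here are illustrative, not defaults:

```yaml
# Illustrative container args for the adapter Deployment
args:
  - --prometheus-url=http://prometheus.prom.svc:9090/   # where to reach Prometheus
  - --metrics-relist-interval=1m                        # how often to re-discover metric series
  - --config=/etc/adapter/config.yaml                   # path to the discovery/naming rules
  - --secure-port=6443                                  # standard generic API server flag
```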
In order to expose metrics beyond CPU and memory to Kubernetes for autoscaling, you’ll need an “adapter” that serves the custom metrics API. Since you’ve got Prometheus metrics, it makes sense to use the Prometheus adapter to serve metrics out of Prometheus.
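To make the end goal concrete, an autoscaling/v2 HorizontalPodAutoscaler that scales on a custom per-pod metric served by the adapter looks roughly like this (the workload and metric names are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app          # hypothetical workload name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # hypothetical custom metric
        target:
          type: AverageValue
          averageValue: 500m               # scale to keep ~0.5 req/s per pod
```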
First, you’ll need to deploy the Prometheus Operator. Check out the getting started guide for the Operator to deploy a copy of Prometheus.
This walkthrough assumes that Prometheus is deployed in the prom namespace. Most of the sample commands and files are namespace-agnostic, but there are a few commands or pieces of configuration that rely on namespace. If you’re using a different namespace, simply substitute that in for prom when it appears.
Now that you’ve got a running copy of Prometheus that’s monitoring your application, you’ll need to deploy the adapter, which knows how to communicate with both Kubernetes and Prometheus, acting as a translator between the two.
The deploy/manifests directory contains the appropriate files for creating the Kubernetes objects to deploy the adapter.
See the deployment README for more information about the steps to deploy the adapter. Note that if you’re deploying on a non-x86_64 (amd64) platform, you’ll need to change the image field in the Deployment to be the appropriate image for your platform.
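Once the adapter is deployed, you can sanity-check that the custom metrics API is being served by querying it directly; a non-empty resources list means the adapter is up and discovering metrics:

```
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
```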
The default adapter configuration should work for this walkthrough and a standard Prometheus Operator configuration, but if you’ve got custom relabelling rules, or your labels above weren’t exactly namespace and pod, you may need to edit the configuration in the ConfigMap. The configuration walkthrough provides an overview of how configuration works.
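As a sketch of what such an edit looks like, here is a single rule in the adapter's configuration that exposes a cumulative counter as a per-second rate (the series and label names are illustrative):

```yaml
rules:
  - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
    resources:
      # map Prometheus labels onto Kubernetes resources
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      # rename the metric: http_requests_total -> http_requests_per_second
      matches: "^(.*)_total$"
      as: "${1}_per_second"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```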