The open source project kube2iam is specific to Amazon AWS. It allows a pod to assume an AWS IAM role without granting the same privileges to other pods on the same node. For example, if your pod needs to use the AWS CLI to copy or sync files with an S3 bucket, you can run kube2iam in your cluster so that ONLY that pod has those rights.
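
In practice, a pod opts in to a role through the iam.amazonaws.com/role annotation. A minimal sketch, assuming a role named s3-sync-role and a bucket named my-bucket already exist in your account:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  annotations:
    # kube2iam intercepts this pod's metadata API calls and hands back
    # temporary credentials for the annotated role only
    iam.amazonaws.com/role: s3-sync-role
spec:
  containers:
    - name: aws-cli
      image: amazon/aws-cli
      command: ["aws", "s3", "sync", "s3://my-bucket", "/data"]
```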

At KubeCon 2017, Amazon stated that it intends to use kube2iam to handle roles and permissions for pods in EKS, its hosted Kubernetes offering.

Set up kube2iam in your cluster

To deploy kube2iam seamlessly in your cluster, there are some steps to follow that aren't spelled out in the README files. Deploy kube2iam as a DaemonSet so that every node runs a copy. Also, until your development or ops team is comfortable with kube2iam, you should probably deploy it with the --verbose and --debug flags, as in the sketch below.
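
A minimal DaemonSet sketch along those lines, assuming the stock jtblin/kube2iam image, the kube-system namespace, and a placeholder account ID in the base role ARN:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube2iam
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      # host networking lets kube2iam reach the real EC2 metadata API
      hostNetwork: true
      containers:
        - name: kube2iam
          image: jtblin/kube2iam:latest
          args:
            - "--base-role-arn=arn:aws:iam::123456789012:role/"
            - "--verbose"
            - "--debug"
          ports:
            - containerPort: 8181
              hostPort: 8181
              name: http
```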

To ensure that all requests for credentials are caught, I would personally recommend using the --iptables=true option. Note, from personal experience, that later removing this option or setting it to false doesn't remove the iptables entry; you actually have to restart the node. Finally, if you are using a network overlay with Kubernetes (and if not, you should be), you need to tell kube2iam which host interface to use. We use Calico ourselves.
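
With Calico, that means adding something like the following to the container spec above (cali+ matches Calico's host-side interface naming; privileged mode, or at least NET_ADMIN, is needed so kube2iam can install its iptables rule):

```yaml
args:
  - "--iptables=true"
  - "--host-ip=$(HOST_IP)"
  - "--host-interface=cali+"
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
securityContext:
  privileged: true
```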

One final note: if you are using RBAC in your Kubernetes cluster, you will need to set up a Kubernetes ServiceAccount, ClusterRole, and ClusterRoleBinding for kube2iam.
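
A sketch of those RBAC objects, assuming kube2iam runs under a kube2iam ServiceAccount in kube-system; it only needs read access to pods and namespaces:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube2iam
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube2iam
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube2iam
subjects:
  - kind: ServiceAccount
    name: kube2iam
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kube2iam
  apiGroup: rbac.authorization.k8s.io
```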

Context

Traditionally in AWS, service-level isolation is done using IAM roles. IAM roles are assigned through instance profiles and are consumed transparently by the aws-sdk via the EC2 metadata API. When the aws-sdk needs credentials, it calls the EC2 metadata API, which returns temporary credentials that are then used to make calls to the AWS service.
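
You can observe this mechanism from any EC2 instance with an instance profile attached; the SDK does the equivalent of the following (the role name is illustrative):

```sh
# list the role attached to this instance's profile
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# fetch temporary credentials (AccessKeyId, SecretAccessKey, Token, Expiration)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role
```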

Problem statement

The problem is that in a multi-tenant, container-based world, multiple containers share the same underlying nodes. Providing access to AWS resources via the node's IAM role would mean creating a single IAM role that is the union of every role needed by every pod on the node. This is not acceptable from a security perspective.

Solution

The solution is to redirect traffic bound for the EC2 metadata API from Docker containers to a container running on each instance, which calls the AWS API to retrieve temporary credentials and returns them to the caller. All other calls are proxied through to the real EC2 metadata API. This container needs to run with host networking enabled so that it can reach the EC2 metadata API itself.
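
The redirect is an iptables DNAT rule in the nat table. The rule kube2iam installs is roughly equivalent to the one below; the in-interface and destination port depend on your CNI and the --app-port setting:

```sh
# send container-originated metadata API traffic to the kube2iam proxy
iptables \
  --table nat \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface cali+ \
  --jump DNAT \
  --to-destination "$HOST_IP:8181"
```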

AWS STS Endpoint and Regions

STS is unique in that it is considered a global service and defaults to the endpoint at https://sts.amazonaws.com, regardless of your region setting. However, unlike other global services (e.g. CloudFront, IAM), STS also has regional endpoints, which must be explicitly requested programmatically. Using a regional STS endpoint can reduce the latency of STS requests.

kube2iam supports STS regional endpoints via the --use-regional-sts-endpoint flag, combined with setting the appropriate AWS_REGION environment variable in your DaemonSet environment. With these two settings configured, kube2iam will use the STS endpoint for that region. If you enable debug-level logging, the STS endpoint used to retrieve credentials will be logged.
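
In the DaemonSet spec that looks something like this; the region value is an example and selects the matching endpoint (e.g. https://sts.us-west-2.amazonaws.com):

```yaml
args:
  - "--use-regional-sts-endpoint"
env:
  # region of the nodes running kube2iam (example value)
  - name: AWS_REGION
    value: us-west-2
```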

Metrics

kube2iam exports a number of Prometheus metrics to assist with monitoring the system's performance. By default, these are exported at the /metrics HTTP endpoint on the application server port (specified by --app-port). This is not always desirable, since anything with access to the application server port can assume roles via kube2iam. To mitigate this, use the --metrics-port argument to serve the /metrics endpoint on a different port.

All of the exported metrics are prefixed with kube2iam_. See the Prometheus documentation for more information on how to get up and running with Prometheus.
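
A sketch of the relevant flags; the port numbers here are examples:

```yaml
args:
  - "--app-port=8181"
  - "--metrics-port=9090"  # /metrics served here instead of the app port
```

You can then verify the endpoint with curl -s http://<node-ip>:9090/metrics | grep kube2iam_.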

Development loop

  • Use minikube to run a cluster locally
  • Build and push a dev image to Docker Hub: make docker-dev DOCKER_REPO=<your docker hub username>
  • Update deployment.yaml as needed
  • Deploy to the local Kubernetes cluster: kubectl create -f deployment.yaml, or kubectl delete -f deployment.yaml && kubectl create -f deployment.yaml to redeploy
  • Expose as a service: kubectl expose deployment kube2iam --type=NodePort
  • Retrieve the service's URL: minikube service kube2iam --url
  • Test your changes, e.g. curl -is $(minikube service kube2iam --url)/healthz (the whole loop is sketched below)
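
Put together, one iteration of the loop looks roughly like this (a sketch; substitute your own Docker Hub username):

```sh
# build and push a dev image, then (re)deploy it to the local cluster
make docker-dev DOCKER_REPO=<your docker hub username>
kubectl delete -f deployment.yaml && kubectl create -f deployment.yaml
kubectl expose deployment kube2iam --type=NodePort
# hit the health endpoint to confirm the new build is serving
curl -is $(minikube service kube2iam --url)/healthz
```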
