We recently added the AWS IAM Authenticator to our custom-configured (non-EKS) Kubernetes clusters running in AWS. For an automated installation, the process involves pre-generating some config and certs, updating a line in the API Server manifest, and installing a daemonset.
In this blog I’ll detail how we set things up iteratively and provide some useful commands to help confirm each component works. These same commands can be used when troubleshooting issues later on.
Our motivation for installing the AWS IAM Authenticator was to open up kubectl access on our clusters to different groups with more granular permissions. We already manage IAM users for everyone who requires access and decided to use this same identity for Kubernetes cluster access.
The documentation is really good, so I’d recommend reading through it and completing those steps manually first. Then use the information below to compare against when automating.
The first piece we automated was the pre-generation of the certs and kubeconfig. We do this to avoid needing to restart the API Server after the daemonset is installed. In Ansible our configuration looks like this.
```yaml
- name: download aws-iam-authenticator binary
  get_url:
    url: https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/0.4.0-alpha.1/aws-iam-authenticator_0.4.0-alpha.1_linux_amd64
    dest: /usr/bin/aws-iam-authenticator
    mode: 0755
    owner: root

- name: create /var/aws-iam-authenticator
  file:
    path: /var/aws-iam-authenticator
    state: directory
    owner: root
    group: root
    mode: 0755

- name: initialise aws-iam-authenticator
  command: chdir=/var/aws-iam-authenticator aws-iam-authenticator init --cluster-id {{ environment_name }}.{{ region }}.{{ environment_type }}
  args:
    creates: /var/aws-iam-authenticator/aws-iam-authenticator.kubeconfig

- name: make /var/aws-iam-authenticator readable
  file:
    path: /var/aws-iam-authenticator
    mode: 0755
    recurse: yes

- name: create /etc/kubernetes/aws-iam-authenticator
  file:
    path: /etc/kubernetes/aws-iam-authenticator
    state: directory
    owner: root
    group: root
    mode: 0755

- name: copy aws-iam-authenticator.kubeconfig to /etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
  copy:
    src: /var/aws-iam-authenticator/aws-iam-authenticator.kubeconfig
    dest: /etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
    mode: 0755
    remote_src: yes  # the source file was generated on the host, not the Ansible controller
```
We set the cluster-id to environment_name.region.environment_type, which maps to how we configure DNS for our clusters. The args: creates: on the initialise task means the init command only runs once.
Now we can run this and make sure we have these files in /var/aws-iam-authenticator.
```
# ls -la /var/aws-iam-authenticator
total 12
drwxr-xr-x.  2 root root   77 Mar 13 09:58 .
drwxr-xr-x. 19 root root  283 Mar 14 19:04 ..
-rwxr-xr-x.  1 root root 2036 Mar 13 09:58 aws-iam-authenticator.kubeconfig
-rwxr-xr-x.  1 root root 1147 Mar 13 09:58 cert.pem
-rwxr-xr-x.  1 root root 1679 Mar 13 09:58 key.pem
```
Also cat the kubeconfig file copied to /etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml to make sure it contains valid YAML. You shouldn’t need to change this kubeconfig file after generation.
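If you want a quick sanity check beyond eyeballing the file, something like this (using a kubectl already present on the master) will confirm it parses as a kubeconfig:

```bash
# parse the pre-generated webhook kubeconfig; errors here mean the init step produced something broken
kubectl config view --kubeconfig /etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
```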
Now it’s time to add some configuration to the Kubernetes API Server manifest. The API Server is configured to use the pre-generated kubeconfig for webhook token authentication.
Here’s a complete manifest used with Kubernetes 1.11.8.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
  labels:
    name: kube-apiserver
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:{{ kube_version }}
    command:
    - /hyperkube
    - apiserver
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Priority,ResourceQuota
    - --advertise-address={{ advertise_ip }}
    - --allow-privileged=true
    - --anonymous-auth=false
    - --apiserver-count={{ node_count }}
    - --audit-log-maxage=30
    - --audit-log-maxbackup=5
    - --audit-log-maxsize=50
{% if region != "local" %}
    - --cloud-provider=aws
{% endif %}
    - --external-hostname={{ master_fqdn }}
    - --audit-policy-file=/etc/kubernetes/audit-policy/apiserver-audit-policy.yaml
    - --audit-log-path=/var/log/apiserver/audit.log
    - --authorization-mode=Node,RBAC
    - --authentication-token-webhook-config-file=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
    - --bind-address=0.0.0.0
    - --insecure-port=0
    - --client-ca-file=/etc/kubernetes/ssl/kubernetes/ca.pem
    - --etcd-servers={{ etcd_servers }}
    - --etcd-cafile=/etc/kubernetes/ssl/etcd/ca.pem
    - --etcd-certfile=/etc/kubernetes/ssl/etcd/client-apiserver.pem
    - --etcd-keyfile=/etc/kubernetes/ssl/etcd/client-apiserver-key.pem
    - --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes/client-system:node:{{ nodename }}.pem
    - --kubelet-client-key=/etc/kubernetes/ssl/kubernetes/client-system:node:{{ nodename }}-key.pem
    - --profiling=false
    - --proxy-client-cert-file=/etc/kubernetes/ssl/frontproxy/client-front-proxy.pem
    - --proxy-client-key-file=/etc/kubernetes/ssl/frontproxy/client-front-proxy-key.pem
    - --requestheader-allowed-names=front-proxy
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/frontproxy/ca.pem
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --runtime-config=api/all=true,authentication.k8s.io/v1beta1=true
    - --secure-port=443
    - --service-account-key-file=/etc/kubernetes/ssl/kubernetes/server-service-accounts.pem
    - --service-cluster-ip-range={{ service_subnet }}
    - --tls-cert-file=/etc/kubernetes/ssl/kubernetes/server-kubernetes.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/kubernetes/server-kubernetes-key.pem
    - --v=2
{% if kube_secret_encryption_key is defined %}
    - --experimental-encryption-provider-config=/etc/kubernetes/config/secrets.conf
{% endif %}
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /var/log/apiserver
      name: audit-log
      readOnly: false
    - mountPath: /etc/cfssl/etcd
      name: cfssl-etcd
      readOnly: true
    - mountPath: /etc/pki
      name: pki
      readOnly: true
    - mountPath: /etc/kubernetes/config
      name: config
      readOnly: true
    - mountPath: /etc/kubernetes/aws-iam-authenticator
      name: aws-iam-authenticator
      readOnly: true
    - mountPath: /etc/kubernetes/audit-policy
      name: audit-policy
      readOnly: true
{% if region == "local" %}
    - mountPath: /etc/ssl/certs/ca-certificates.crt
      name: fake-ec2-cert-bundle
      readOnly: true
{% endif %}
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /etc/pki
    name: pki
  - hostPath:
      path: /var/log/apiserver
    name: audit-log
  - hostPath:
      path: /etc/cfssl/etcd
    name: cfssl-etcd
  - hostPath:
      path: /etc/kubernetes/config
    name: config
  - hostPath:
      path: /etc/kubernetes/aws-iam-authenticator
    name: aws-iam-authenticator
  - hostPath:
      path: /etc/kubernetes/audit-policy
    name: audit-policy
{% if region == "local" %}
  - hostPath:
      path: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
    name: fake-ec2-cert-bundle
{% endif %}
```
As part of this work I also added audit-policy configuration. For now we’re just using the default policy given as an example in the Kubernetes docs.
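If you haven’t set up auditing before, the policy file referenced in the manifest above can be as simple as a catch-all rule. This is a minimal sketch rather than our exact policy:

```yaml
# /etc/kubernetes/audit-policy/apiserver-audit-policy.yaml (minimal example for 1.11)
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# log request metadata (user, verb, resource, response code) for everything
- level: Metadata
```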
Changes made to /etc/kubernetes/manifests/kube-apiserver.yaml are automatically picked up by Kubelet and the API Server is restarted. You can troubleshoot any problems with the API Server configuration using journalctl -u kubelet -n --no-pager.
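A couple of quick checks on the master will confirm the restart actually happened and the new flag is live. These assume SSH access to the master and Docker as the container runtime:

```bash
# recent kubelet log lines mentioning the apiserver pod
journalctl -u kubelet -n 50 --no-pager | grep -i apiserver

# confirm the webhook flag is present on the running process
ps aux | grep -o 'authentication-token-webhook-config-file=[^ ]*'
```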
Next we install a daemonset that runs an AWS IAM Authenticator pod on each master node.
For your first attempt I’d recommend keeping this extremely simple so you can test and debug issues. An example daemonset and configmap are shown below.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    clusterID: {{ environment_name }}.{{ region }}.{{ environment_type }}
    defaultRole: arn:aws:iam::{{ accountid }}:role/KubernetesAdmin
    server:
      mapUsers:
      # replace with the IAM user you want to test with
      - userARN: arn:aws:iam::{{ accountid }}:user/[email protected]
        username: [email protected]
        groups:
        - system:masters
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: aws-iam-authenticator
    spec:
      hostNetwork: true
      nodeSelector:
        role: "master"
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists
      containers:
      - name: aws-iam-authenticator
        image: gcr.io/heptio-images/authenticator:v0.3.0
        args:
        - server
        - --config=/etc/aws-iam-authenticator/config.yaml
        - --state-dir=/var/aws-iam-authenticator
        - --kubeconfig-pregenerated=true
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m
        volumeMounts:
        - name: config
          mountPath: /etc/aws-iam-authenticator/
        - name: state
          mountPath: /var/aws-iam-authenticator/
        - name: output
          mountPath: /etc/kubernetes/aws-iam-authenticator/
      volumes:
      - name: config
        configMap:
          name: aws-iam-authenticator
      - name: output
        hostPath:
          path: /etc/kubernetes/aws-iam-authenticator/
      - name: state
        hostPath:
          path: /var/aws-iam-authenticator/
```
Some important things to verify: the clusterID must match the string you used to pre-generate the config, and the container args must include --kubeconfig-pregenerated=true, otherwise the authenticator will attempt to generate new certs and config on every restart.
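Before moving on it’s worth checking that the daemonset actually scheduled a pod on every master and that those pods are healthy, for example:

```bash
# one pod per master node, all Running
kubectl -n kube-system get ds aws-iam-authenticator
kubectl -n kube-system get pods -l k8s-app=aws-iam-authenticator -o wide
```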
The configmap shows I’m simply mapping my [email protected] IAM user account in AWS to the built-in system:masters Kubernetes group. Change this to whatever IAM user you want to test with.
This will let you test the end to end process using your own account. In future iterations you can remove this config and use another option like mapping AWS IAM roles to specific Kubernetes groups.
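For reference, a role mapping in the authenticator config looks roughly like the sketch below; the role name and group are hypothetical, and the group still needs matching RBAC bindings:

```yaml
# hypothetical mapRoles entry for the aws-iam-authenticator config.yaml
server:
  mapRoles:
  - roleARN: arn:aws:iam::{{ accountid }}:role/KubernetesViewers   # hypothetical role
    username: kubernetes-viewer
    groups:
    - view-only                                                    # bind this group via RBAC
```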
That should be all of the server-side configuration done. The last step is a kubeconfig file to use locally. Here’s an example.
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: "ca.pem"
    server: https://dns_of_my_kubernetes_apiserver
  name: environment_name.region.environment_type
contexts:
- context:
    cluster: environment_name.region.environment_type
    namespace: default
    user: [email protected]
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: [email protected]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "environment_name.region.environment_type"
```
For this config to work you need the ca.pem for the cluster in the same directory as the kubeconfig file. You’ll also need your IP address whitelisted in the API Server security group. Also, make sure your user matches what’s in the configmap mapping. Finally, the cluster ID on the last line of the config needs to match the clusterID specified when you ran init.
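A couple of quick checks from the directory holding the kubeconfig will catch most of these mistakes; this assumes the file is named kubeconfig, as in the alias below:

```bash
# ca.pem must sit next to the kubeconfig
ls ca.pem kubeconfig

# the argument after "-i" must match the clusterID used with aws-iam-authenticator init
grep -A1 '"-i"' kubeconfig
```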
I’ve automated the creation of kubeconfig files with a simple script that whitelists our IP, copies down the ca.pem from the cluster, generates the kubeconfig file and then prints an alias. We just change into the environment directory in our Terraform repo, run the script and it does everything.
```
$ beconnect.py -k
alias k='kubectl --kubeconfig $(pwd)/kubeconfig'
```
Copying and pasting that alias then sets all k commands to use the kubeconfig file. I find this quite handy: a new shell session can be configured for kubectl access with a single command run in the relevant environment directory.
When all goes well you can run:
```
$ alias k='kubectl --kubeconfig $(pwd)/kubeconfig'
$ k get no
NAME                                         STATUS                     ROLES    AGE   VERSION
xxxxxxxxxxxxxx.us-east-2.compute.internal    Ready,SchedulingDisabled   master   2d    v1.11.8
xxxxxxxxxxxxx.us-east-2.compute.internal     Ready                      <none>   2d    v1.11.8
xxxxxxxxxxxxx.us-east-2.compute.internal     Ready                      <none>   2d    v1.11.8
xxxxxxxxxxxxx.us-east-2.compute.internal     Ready,SchedulingDisabled   master   2d    v1.11.8
xxxxxxxxxxxxx.us-east-2.compute.internal     Ready                      <none>   2d    v1.11.8
```
And if you check the logs on the master you’ll see:
```
$ k -n kube-system logs aws-iam-authenticator-fsr6k
time="2019-03-13T09:20:00Z" level=info msg="mapping IAM user" groups="[system:masters]" user="arn:aws:iam::accountid:user/[email protected]" username=steven.acreman@kubedex.com
time="2019-03-13T09:20:00Z" level=info msg="loaded existing keypair" certPath=/var/aws-iam-authenticator/cert.pem keyPath=/var/aws-iam-authenticator/key.pem
time="2019-03-13T09:20:00Z" level=info msg="listening on https://127.0.0.1:21362/authenticate"
time="2019-03-13T09:20:00Z" level=info msg="reconfigure your apiserver with `--authentication-token-webhook-config-file=/etc/kubernetes/heptio-authenticator-aws/kubeconfig.yaml` to enable (assuming default hostPath mounts)"
time="2019-03-16T06:55:58Z" level=info msg="access granted" arn="arn:aws:iam::accountid:user/[email protected]" client="127.0.0.1:45618" groups="[system:masters]" method=POST path=/authenticate uid="heptio-authenticator-aws:xxxx" username=steven.acreman@kubedex.com
```
Now you’re done and you can focus on changing the settings in the configmap to iterate on how users are mapped to groups.
What steps should you take when it all goes wrong and you can’t work out why? Here’s a quick summary of how I’d systematically work through from client to daemonset to see where the problem lies.
```bash
kubectl --v=10 get nodes
```
I wasted a couple of days trying to work out why nothing was showing in my aws-iam-authenticator pod logs when I ran commands. The reason was that I’d forgotten to put user: into my kubeconfig context. This meant kubectl wasn’t sending requests with a bearer token. Adding --v=10 to my kubectl alias immediately showed my requests weren’t authenticating. I wish I’d done this sooner.
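If you want a quick way to spot the same problem, filtering the verbose output for the response status is usually enough; a 401 here with nothing appearing in the authenticator logs points at the client side:

```bash
# failed authentication shows up as 401 Unauthorized response statuses
kubectl --kubeconfig $(pwd)/kubeconfig --v=10 get nodes 2>&1 | grep -i 'response status'
```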
```bash
aws-iam-authenticator token -i environment_name.region.environment_type
```
This should print out a token. If this doesn’t work then you have a problem with your AWS IAM account permissions.
```
$ aws-iam-authenticator token -i environment_name.region.environment_type
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1alpha1","spec":{},"status":{"expirationTimestamp":"2019-03-16T07:21:04Z","token":"k8s-aws-v1.somereallylongstring"}}
```
Now grab that k8s-aws-v1.somereallylongstring token from the output of the last command and try to use it on the master directly against the authenticator.
```bash
curl --insecure -H 'Content-Type: application/json' -d '{"apiVersion": "authentication.k8s.io/v1","kind": "TokenReview","spec": {"token": "k8s-aws-v1.somereallylongstring"}}' https://127.0.0.1:21362/authenticate
```
If successful you’ll get some output like this:
```
{"metadata":{"creationTimestamp":null},"spec":{},"status":{"authenticated":true,"user":{"username":"[email protected]","uid":"heptio-authenticator-aws:xxxxxxxx","groups":["system:masters"]}}}
```
If this works you know it’s not a token issue, and if it fails you should get some kind of meaningful error to debug. Next, try the same token directly against the API Server.
```bash
curl -k https://localhost:443/version --header "Authorization: Bearer k8s-aws-v1.somereallylongstring"
```
You’ll need to use a token again, as with the previous command, but this time passed in the Authorization header. If this works you’ll get some nice JSON output showing you were authenticated, along with some server version details.
```json
{
  "major": "1",
  "minor": "10",
  "gitVersion": "v1.10.11",
  "gitCommit": "637c7e288581ee40ab4ca210618a89a555b6e7e9",
  "gitTreeState": "clean",
  "buildDate": "2018-11-26T14:25:46Z",
  "goVersion": "go1.9.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
```
If this doesn’t work and you get an authentication error, check that you’re using the correct keys for the AWS account. It’s quite easy to generate tokens using the wrong AWS keys and then wonder why the API Server returns 401s.
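A quick way to check is to ask AWS which identity your current credentials resolve to, since this ARN is what the authenticator tries to match against its mappings:

```bash
# the Arn in this output should match a userARN/roleARN in the configmap
aws sts get-caller-identity
```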
If you’re still stuck, check the aws-iam-authenticator logs themselves. The easiest way to do this is to SSH in to each master and run docker logs to pull back the logs for the aws-iam-authenticator container.
```
root@master-i-03664643a8188a562:~ [0]# docker ps | grep iam
fed7c23df3e9   gcr.io/heptio-images/authenticator   "/heptio-authenticat…"   2 days ago   Up 2 days   k8s_aws-iam-authenticator_aws-iam-authenticator-zhb4j_kube-system_b1658ee2-4576-11e9-a910-0ae7cf4614a8_0
8bd1daaaea96   k8s.gcr.io/pause-amd64:3.1           "/pause"                 2 days ago   Up 2 days   k8s_POD_aws-iam-authenticator-zhb4j_kube-system_b1658ee2-4576-11e9-a910-0ae7cf4614a8_0
root@master-i-03664643a8188a562:~ [0]# docker logs fed7c23df3e9
time="2019-03-13T09:59:34Z" level=info msg="mapping IAM user" groups="[system:masters]" user="arn:aws:iam::accountid:user/[email protected]" username=steven.acreman@kubedex.com
time="2019-03-13T09:59:34Z" level=info msg="loaded existing keypair" certPath=/var/aws-iam-authenticator/cert.pem keyPath=/var/aws-iam-authenticator/key.pem
time="2019-03-13T09:59:34Z" level=info msg="listening on https://127.0.0.1:21362/authenticate"
time="2019-03-13T09:59:34Z" level=info msg="reconfigure your apiserver with `--authentication-token-webhook-config-file=/etc/kubernetes/heptio-authenticator-aws/kubeconfig.yaml` to enable (assuming default hostPath mounts)"
time="2019-03-16T06:51:14Z" level=warning msg="access denied" arn="arn:aws:iam::accountid:user/[email protected]" client="127.0.0.1:47846" error="ARN is not mapped: arn:aws:iam::accountid:user/[email protected]" method=POST path=/authenticate
time="2019-03-16T07:17:54Z" level=info msg="access granted" arn="arn:aws:iam::accountid:user/[email protected]" client="127.0.0.1:47846" groups="[system:masters]" method=POST path=/authenticate uid="heptio-authenticator-aws:xxxx" username=steven.acreman@kubedex.com
```
Here we can see an unsuccessful login attempt by [email protected], as that user wasn’t mapped at the time, followed by a successful login.
Unfortunately the AWS IAM Authenticator binary running in the daemonset container doesn’t automatically pick up configmap changes. You’ll need to orchestrate a container restart whenever the configmap changes.
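One low-tech way to do that, assuming you’re happy for the daemonset to recreate the pods, is simply deleting them:

```bash
# the daemonset recreates the pods, which then read the updated configmap
kubectl -n kube-system delete pods -l k8s-app=aws-iam-authenticator
```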
There’s a cool kubectl subcommand called auth can-i that you can use to verify permissions for your user.
```
$ kubectl auth can-i create pods
yes
```
As you can see from the API Server configuration posted above I chose to also enable the audit log. It’s useful to check /var/log/apiserver/audit.log when very weird things are happening. I’ve not yet looked into tuning the policy to highlight errors more visibly but it’s on the backlog as a task.
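With a Metadata-level policy the audit log is one JSON event per line, so even a crude grep for 401s can surface failed authentication attempts. This is a rough example rather than a polished query:

```bash
# show the most recent requests the apiserver rejected with 401 Unauthorized
grep '"code":401' /var/log/apiserver/audit.log | tail -n 5
```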
You need to be running version 1.10.x or higher for both of these. Run kubectl version to check.
I’ve not gone into a lot of detail on the more advanced mapping of users. At work we’ve moved to using a single AWS account solely for IAM identity purposes; those individual user accounts then assume roles into other accounts for the different environment types. We’re still in the process of rolling this out fully, so I’ve not yet spent a massive amount of time working with AWS role mappings to custom Kubernetes groups.
Our goal is to granularly define what groups of users can access which types of clusters with well defined RBAC permissions and lock certain groups down to only certain namespaces. If there’s interest I’ll do a follow up blog when this work is complete.
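As a rough illustration of the direction, locking a mapped group down to a single namespace is just standard RBAC; the group and namespace names here are hypothetical:

```yaml
# bind a group emitted by the authenticator mapping to read-only access in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-view          # hypothetical
  namespace: team-a          # hypothetical
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: team-a-developers    # must match a group in the authenticator config
```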
Hopefully this is useful for anyone using the AWS IAM Authenticator in their own custom Kubernetes clusters in AWS. As always post any corrections or questions below and I’ll try to answer.