Kubernetes: Manage Access via Certificates
In this series, I want to explain how the user authentication and authorization process works in a Kubernetes environment.
We will look at two configuration types that are fundamental to Kubernetes RBAC.
Nowadays we work across several cloud platforms, each with its own authentication mechanisms for security. Because of that, we may end up delegating our authentication logic to our cloud providers; in fact, the first method abstracts some of these operations away from us.
Google Cloud-Based Kubernetes RBAC
At the authentication level, Kubernetes offers several options by itself; for example, you can authenticate a user by Google mail, by an IAM service account, or by a service account that shares a certificate.
In this section, I will show the options for authentication and authorization.
Let’s get started!
In this section, I’ve got a scenario about my team: I have a QA engineering team, and to support their development and separate their deployments, I’m creating a namespace.
kubectl create namespace qa-team
and then I grant this group access to pods only (get, watch, list, create).
Here is the rule from that policy:

- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create"]
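A complete Role manifest wrapping this rule might look like the following sketch; the name pod-manager is an illustrative choice, not from the original post:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-manager        # illustrative name
  namespace: qa-team       # the namespace created above
rules:
- apiGroups: [""]          # "" means the core API group (pods live here)
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create"]
```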
After creating this policy, we bind it to a user; at the Kubernetes level we have the RoleBinding object for that.
By the way, we have another option in the Google Cloud environment: we can attach a Kubernetes policy to users at the Google Cloud level via a Gmail address or an IAM service account, using a RoleBinding subject like this:

- kind: User
  name: firstname.lastname@example.org
  apiGroup: rbac.authorization.k8s.io
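A complete RoleBinding tying such a user to the pod policy could be sketched as follows; the binding name and the referenced role name are illustrative assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: qa-team-pod-manager      # illustrative name
  namespace: qa-team
subjects:
- kind: User
  name: firstname.lastname@example.org   # Gmail identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager              # must match the Role's metadata.name
  apiGroup: rbac.authorization.k8s.io
```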
After all those things, we share our Kubernetes cluster’s credentials with the QA team, and they run:

gcloud container clusters get-credentials $CLUSTER_ID --zone $ZONE
When they query unprivileged resources, Kubernetes rejects the request. For example, fetching the list of namespaces, or the pods in the default namespace:

$ kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "firstname.lastname@example.org" cannot list namespaces at the cluster scope: Required "container.namespaces.list" permission.
$ kubectl get po
Error from server (Forbidden): pods is forbidden: User "email@example.com" cannot list pods in the namespace "default": Required "container.pods.list" permission.
But when I run a command that my policy allows, in the qa-team namespace, it works:
$ kubectl get po -n qa-team
NAME READY STATUS RESTARTS AGE
selenium-0 1/1 Running 0 10d
Bingo! This scenario, however, depends on Google Cloud; we should always think cloud agnostic.
Let’s look at a cluster created with kops on AWS and managed by us.
What is Kops?
- Kops is a tool that automates Kubernetes installation in your cloud environment.
- Kops always keeps its state in an S3 bucket; this bucket contains the certificates, credentials, etc. for your Kubernetes cluster.
- Kops is a CLI tool, not a managed Kubernetes PaaS offered by your cloud provider. Therefore, you are responsible for your Kubernetes cluster at the platform and infrastructure level (though not the hardware).
Let’s get started in this new cloud environment.
Hint: Kubernetes secures all API traffic with certificates created at installation time.
We will reuse the cluster CA mentioned above and create new certificates to share with our users.
First, export the kops state bucket address as an environment variable:
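The original post doesn’t show the export itself, so here is a sketch with placeholder values; the variable holds just the bucket name, since the copy commands below prepend s3:// themselves:

```shell
# Placeholder values -- replace with your actual kops state bucket and cluster name
export KOPS_STATE_STORE=my-kops-state-bucket
export CLUSTERNAME=my-cluster.example.com
```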
This bucket contains the base certificates for our cluster; let’s download them:
aws s3 cp s3://$KOPS_STATE_STORE/$CLUSTERNAME/pki/private/ca/$KEY ca.key
aws s3 cp s3://$KOPS_STATE_STORE/$CLUSTERNAME/pki/issued/ca/$CERT ca.crt
You may want to manually check the cert and key IDs for your cluster. In this chapter we create certificates for a specific user:
$ openssl genrsa -out qa.key 4096
$ openssl req -new -key qa.key -out qa.csr -subj '/CN=qa-user-1/O=qa'
$ openssl x509 -req -in qa.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out qa.crt -days 365
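Before binding anything to this certificate, it is worth sanity-checking it. The following self-contained dry run uses a throwaway CA standing in for the real ca.key/ca.crt downloaded from the kops state bucket (an assumption for illustration only; never use a throwaway CA against a real cluster):

```shell
cd "$(mktemp -d)"
# Throwaway CA standing in for the cluster CA from the kops bucket
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj '/CN=throwaway-ca' -days 1 -out ca.crt
# Same user-certificate flow as above
openssl genrsa -out qa.key 2048
openssl req -new -key qa.key -out qa.csr -subj '/CN=qa-user-1/O=qa'
openssl x509 -req -in qa.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out qa.crt -days 365
# Kubernetes reads the user name from CN and the group from O
openssl x509 -in qa.crt -noout -subject
openssl verify -CAfile ca.crt qa.crt     # prints "qa.crt: OK" on success
```

If the subject line shows CN qa-user-1 and O qa, the RoleBinding in the next step will match this user.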
Create a role binding for the qa user. Kubernetes takes the user name from the certificate’s CN (qa-user-1) and the group from its O (qa).
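A RoleBinding for this certificate user might look like the following sketch; the binding name and the pod-manager role name are illustrative, with the role itself mirroring the pod policy from the Google Cloud section:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: qa-user-1-pod-manager    # illustrative name
  namespace: qa-team
subjects:
- kind: User
  name: qa-user-1                # matches the certificate CN
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager              # illustrative; must match an existing Role
  apiGroup: rbac.authorization.k8s.io
```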
After creating the role binding, we must add the certificate configuration to the qa member’s client:
kubectl config set-cluster <CLUSTER_NAME> --server=https://<URL>
kubectl config set-cluster <CLUSTER_NAME> --certificate-authority=ca.crt
kubectl config set-credentials qa-user-1 --client-key=qa.key --client-certificate=qa.crt
kubectl config set-context <CLUSTER_NAME> --user=qa-user-1 --cluster=<CLUSTER_NAME>
kubectl config use-context <CLUSTER_NAME>
We’ve set up the qa team user’s certificate; after all those things, we set the context on each qa team client (developer machine, CI machine, etc.).
After this process, they can only get resources in the given namespace, with the verbs Kubernetes allows.
I have shared a script that generates user certificates and kubeconfig files; here is the link: https://github.com/WoodProgrammer/k8s-config-generator
We have a Kubernetes cluster, wherever it may live, and we have clients. As you can see above, a fully managed cluster on Google Cloud gives us several options, and on a self-managed cluster created with kops we can manage access with certificates ourselves.