Kubernetes: Managing Access via Certificates
In this series, I want to explain how the user authentication and authorization process works in a Kubernetes environment.
We will look at two configuration approaches that are fundamental to Kubernetes RBAC.
Today we work across several cloud platforms, each offering a number of authentication mechanisms for security. This means our authentication logic often depends on the cloud provider; the first method below actually abstracts some of that work away from us.
Google Cloud-Based Kubernetes RBAC
At the authentication level, Kubernetes offers several options: for example, you can authenticate a user with a Google account, an IAM service account, or a Kubernetes service account (by sharing its certificate).
In this section, I will show the logic behind these authentication and authorization options.
Let's get started.
Here is the scenario for my team: I have a QA Engineering team, and to separate their development work and their deployments, I am creating a dedicated namespace.
kubectl create namespace qa-team
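If you prefer to keep everything in version control, the same namespace can also be created declaratively; this manifest is equivalent to the command above (the file name `namespace.yaml` is just an example):

```yaml
# namespace.yaml - declarative equivalent of `kubectl create namespace qa-team`
apiVersion: v1
kind: Namespace
metadata:
  name: qa-team
```

Apply it with `kubectl apply -f namespace.yaml`.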
After separating the QA Engineering team's work, at the request of my team leader I am granting this group access to create pods only.
Here is the policy:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: qa-team
  name: qa-pod
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create"]
After creating the policy, we bind it to a user; at the Kubernetes level, the RoleBinding does this. In a Google Cloud environment we have another option for the subject:
subjects:
- kind: User
  name: handsomeqateamlead@kloia.com
We can attach a Kubernetes policy to users at the Google Cloud level via a Gmail account or an IAM service account, as shown below.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-binding
  namespace: qa-team
subjects:
- kind: User
  name: handsomeqateamlead@kloia.com
roleRef:
  kind: Role
  name: qa-pod
  apiGroup: rbac.authorization.k8s.io
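The binding above uses a Gmail address as the subject. If the team instead authenticates with a Google Cloud IAM service account, only the `subjects` stanza changes; the service-account e-mail below is a hypothetical example:

```yaml
subjects:
- kind: User
  # Hypothetical IAM service account; use your project's real account e-mail
  name: qa-deployer@my-project.iam.gserviceaccount.com
```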
After all of that, we share the credentials for our Kubernetes cluster with the QA team, and they run:
gcloud container clusters get-credentials $CLUSTER_ID --zone $ZONE --project $GCLOUD_PROJECT
When they query unprivileged resources, Kubernetes returns errors like the following. Here I try to fetch the list of namespaces, and then the pods in the default namespace:
$ kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "handsomeqateamlead@kloia.com" cannot list namespaces at the cluster scope: Required "container.namespaces.list" permission.
$ kubectl get po
Error from server (Forbidden): pods is forbidden: User "handsomeqateamlead@kloia.com" cannot list pods in the namespace "default": Required "container.pods.list" permission.
But when I run a command allowed by my policies, it works, as shown below:
$ kubectl get po -n qa-team
NAME         READY   STATUS    RESTARTS   AGE
selenium-0   1/1     Running   0          10d
This scenario depends on Google Cloud, but we should always think cloud-agnostic.
Let's look at a cluster created by kops and managed by us on AWS.
What is Kops?
- Kops is a tool that automates Kubernetes installation into your cloud environment.
- Kops keeps its state in an S3 bucket; this bucket contains the certificates, credentials, etc. for your Kubernetes cluster.
- Kops is a CLI tool, not a managed Kubernetes PaaS operated by your cloud provider. That means you are responsible for your Kubernetes cluster at the platform and infrastructure level (though not the hardware).
Let's get started with the new cloud environment.
Hint: Kubernetes secures all API traffic with certificates created at installation time.
We will use the cluster CA mentioned above to create and sign new certificates to share with our users.
First, export the kops state bucket address as an environment variable:
export KOPS_STATE_STORE=s3://delikanlilar-cluster
This bucket contains the base certificates for our cluster; let's download them:
aws s3 cp $KOPS_STATE_STORE/$CLUSTERNAME/pki/private/ca/$KEY ca.key
aws s3 cp $KOPS_STATE_STORE/$CLUSTERNAME/pki/issued/ca/$CERT ca.crt
You may need to check the certificate and key IDs for your cluster manually. In this step, we create a certificate for a specific user:
$ openssl genrsa -out qa.key 4096
$ openssl req -new -key qa.key -out qa.csr -subj '/CN=qa-user-1/O=qa'
$ openssl x509 -req -in qa.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out qa.crt -days 365
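If the real cluster CA is not at hand, here is a self-contained sketch of the same signing flow that can be tried locally with a throwaway CA. The demo CA subject and all file names are stand-ins; against a real kops cluster you would use the downloaded ca.key and ca.crt instead:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Throwaway CA standing in for the one downloaded from the kops state bucket
openssl genrsa -out ca.key 4096 2>/dev/null
openssl req -x509 -new -key ca.key -days 365 -out ca.crt -subj '/CN=demo-kubernetes-ca'

# User key and CSR: the CN becomes the Kubernetes username, O the group
openssl genrsa -out qa.key 4096 2>/dev/null
openssl req -new -key qa.key -out qa.csr -subj '/CN=qa-user-1/O=qa'

# Sign the CSR with the CA, producing the client certificate
openssl x509 -req -in qa.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out qa.crt -days 365 2>/dev/null

# Confirm the certificate chains back to the CA (prints "qa.crt: OK")
openssl verify -CAfile ca.crt qa.crt
# And that it carries the expected subject
openssl x509 -in qa.crt -noout -subject
```

The CN/O pair in the subject is what Kubernetes reads as the username and group when this certificate authenticates a request.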
Now create a Role and a RoleBinding for the qa user:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: qa-team
  name: qa-team-pod-list
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: qa-team-role-binding
  namespace: qa-team
subjects:
- kind: User
  name: qa-user-1
roleRef:
  kind: Role
  name: qa-team-pod-list
  apiGroup: rbac.authorization.k8s.io
After creating the RoleBinding, we must configure the certificates on the QA member's client:
kubectl config set-cluster <CLUSTER_NAME> --server=https://<URL> --certificate-authority=ca.crt
kubectl config set-credentials viewer --client-key=qa.key --client-certificate=qa.crt
kubectl config set-context <CLUSTER_NAME> --user=viewer --cluster=<CLUSTER_NAME>
kubectl config use-context <CLUSTER_NAME>
We have now set up the QA team user's certificates; after that, this context must be set on every QA team client (developer machine, CI machine, etc.).
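For reference, the `kubectl config` commands roughly produce a kubeconfig entry shaped like the sketch below. All bracketed values are placeholders, and the certificate paths are illustrative (they assume the key and certificate generated earlier sit next to the config file):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: <CLUSTER_NAME>
  cluster:
    server: https://<URL>
    certificate-authority: ca.crt
users:
- name: viewer
  user:
    client-certificate: qa.crt
    client-key: qa.key
contexts:
- name: <CLUSTER_NAME>
  context:
    cluster: <CLUSTER_NAME>
    user: viewer
current-context: <CLUSTER_NAME>
```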
After these steps, the user can read resources only in the given namespace, and only with the Kubernetes verbs we allowed.
Conclusion
Wherever our Kubernetes cluster lives, we have clients that need access to it. As you saw above, a fully managed cluster on Google Cloud gives us several options at the cloud level, while on a self-managed cluster created by kops we can manage access with certificates.
Bonus
In a kops cluster, you can also authenticate and authorize user access via AWS IAM users with aws-iam-authenticator; see the link below.