First of all, sorry if the title is not as self-explanatory as it should be. I’m running a Rancher HA setup that is configured with authentication via Active Directory. User login to the UI works fine.
Then I created a cluster with RKE and integrated it into Rancher, while also giving an Active Directory group owner rights on the cluster. The users then use the Rancher functionality to generate a kubeconfig, create namespaces, and so on. Then we tried creating a service account with the following steps:
Creating a ClusterRole with the following config:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: create-deployments
rules:
- apiGroups: ["*"]
  resources: ["deployments","pods/*","services","secrets","networkpolicies.networking.k8s.io","pods"]
  verbs: ["get","list","watch","create","update","patch","apply"]
Creating the Service account with:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: azure-devops-svc
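Both manifests are applied with a plain kubectl apply; the ClusterRole is cluster-scoped, while the service account ends up in the default namespace of the current context (the file names below are just placeholders):
kubectl apply -f clusterrole.yaml
kubectl apply -f serviceaccount.yaml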
And finally, creating the ClusterRoleBinding:
kubectl create clusterrolebinding azure-devops-role-binding-svc --clusterrole=create-deployments --serviceaccount=default:azure-devops-svc
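For reference, as far as I can tell that command creates a binding equivalent to this manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: azure-devops-role-binding-svc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: create-deployments
subjects:
- kind: ServiceAccount
  name: azure-devops-svc
  namespace: default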
I can see the role and the binding in the Rancher detail view of the cluster, and to me everything looks good. Then I test the RBAC access of the service account with
kubectl auth can-i create pods --as=system:serviceaccount:default:azure-devops-svc
and get the following:
Error from server (Forbidden): {"Code":{"Code":"Forbidden","Status":403},"Message":"clusters.management.cattle.io \"c-m-jwzmtg6s\" is forbidden: User \"system:serviceaccount:default:azure-devops-svc\" cannot get resource \"clusters\" in API group \"management.cattle.io\" at the cluster scope","Cause":null,"FieldName":""} (post selfsubjectaccessreviews.authorization.k8s.io)
When I switch to the RKE-generated kubeconfig, it works just fine and I get:
kubectl auth can-i create pods --as=system:serviceaccount:default:azure-devops-svc
yes
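For completeness, I assume the service account’s effective rights could also be checked without impersonation, by authenticating with its own token directly against the cluster endpoint from the RKE kubeconfig, roughly like this (the ca.pem path and the sa-test.yml file name are just placeholders; kubectl create token needs Kubernetes/kubectl 1.24+, on older clusters the token would come from the service account’s secret instead):
# mint a short-lived token for the service account (run with the admin kubeconfig)
TOKEN=$(kubectl -n default create token azure-devops-svc)
# build a throwaway kubeconfig whose only credential is that token
kubectl config set-cluster rocinante --server=https://etcdnode.acme.internal.de:6443 --certificate-authority=ca.pem --kubeconfig=sa-test.yml
kubectl config set-credentials azure-devops-svc --token="$TOKEN" --kubeconfig=sa-test.yml
kubectl config set-context sa-test --cluster=rocinante --user=azure-devops-svc --kubeconfig=sa-test.yml
# run the check as the service account itself, without --as
kubectl --kubeconfig=sa-test.yml --context=sa-test auth can-i create pods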
Here is the Rancher-generated kubeconfig (cert data and token omitted):
apiVersion: v1
kind: Config
clusters:
- name: "rocinante"
  cluster:
    server: "https://rancheradm.acme.internal.de/k8s/clusters/c-m-jwzmtg6s"
    certificate-authority-data: "xxx"
users:
- name: "rocinante"
  user:
    token: "kubeconfig-user-x5q8xwps94:xxx"
contexts:
- name: "rocinante"
  context:
    user: "rocinante"
    cluster: "rocinante"
current-context: "rocinante"
And this is the RKE-generated kubeconfig (cert data and token omitted):
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: xxx
    server: "https://etcdnode.acme.internal.de:6443"
  name: "rocinante"
contexts:
- context:
    cluster: "rocinante"
    user: "kube-admin-rocinante"
  name: "rocinante"
current-context: "rocinante"
users:
- name: "kube-admin-rocinante"
  user:
    client-certificate-data: xxx
    client-key-data: xxx
Am I missing a configuration step in the Rancher UI, or did I get a step wrong when creating the service account? I’m thankful for any hint.