Rancher 2.6.3 created kubeconfig does not allow RBAC testing of service account

First of all, sorry if the title is not as self-explanatory as it should be. I’m running a Rancher HA setup that is configured with authentication via Active Directory. User login to the UI works fine.

Then I created a cluster with RKE and integrated it into Rancher, while also giving an Active Directory group owner rights to the cluster. The users then use the Rancher functionality to generate a kubeconfig, create namespaces, and so on. Then we tried creating a service account with the following steps:
Creating a ClusterRole with the following config:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: create-deployments
rules:
  - apiGroups: ["*"]
    resources: ["deployments","pods/*","services","secrets","networkpolicies.networking.k8s.io","pods"]
    verbs: ["get","list","watch","create","update","patch","apply"]

Creating the Service account with:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: azure-devops-svc

And finally creating the ClusterRoleBinding:

kubectl create clusterrolebinding azure-devops-role-binding-svc --clusterrole=create-deployments --serviceaccount=default:azure-devops-svc
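
For completeness, the created objects can also be double-checked from the CLI; these are just the standard kubectl commands for the names used above:

kubectl get clusterrole create-deployments -o yaml
kubectl get clusterrolebinding azure-devops-role-binding-svc -o yaml
kubectl -n default get serviceaccount azure-devops-svc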

I can see the role and the binding in the Rancher detail view of the cluster and to me everything looks good. Then I try testing the RBAC access of the service account with

kubectl auth can-i create pods --as=system:serviceaccount:default:azure-devops-svc

and get the following:

Error from server (Forbidden): {"Code":{"Code":"Forbidden","Status":403},"Message":"clusters.management.cattle.io \"c-m-jwzmtg6s\" is forbidden: User \"system:serviceaccount:default:azure-devops-svc\" cannot get resource \"clusters\" in API group \"management.cattle.io\" at the cluster scope","Cause":null,"FieldName":""} (post selfsubjectaccessreviews.authorization.k8s.io)

When I switch to the RKE-generated kubeconfig, it works just fine and I get:

kubectl auth can-i create pods --as=system:serviceaccount:default:azure-devops-svc
yes
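
As an additional cross-check, the service account’s rights could also be tested without impersonation by authenticating as the service account itself against the API server from the RKE kubeconfig. This is only a sketch: the file name rke-kubeconfig.yaml is a placeholder, and the token handling depends on the Kubernetes version.

# Get a bearer token for the service account.
# On Kubernetes >= 1.24 a short-lived token can be requested directly:
TOKEN=$(kubectl -n default create token azure-devops-svc)
# On older clusters, read it from the auto-generated secret instead:
# TOKEN=$(kubectl -n default get secret \
#   $(kubectl -n default get sa azure-devops-svc -o jsonpath='{.secrets[0].name}') \
#   -o jsonpath='{.data.token}' | base64 -d)

# Add the token as a user to a copy of the RKE kubeconfig and repeat the check as the SA:
kubectl --kubeconfig rke-kubeconfig.yaml config set-credentials azure-devops-svc --token="$TOKEN"
kubectl --kubeconfig rke-kubeconfig.yaml config set-context sa-test --cluster=rocinante --user=azure-devops-svc
kubectl --kubeconfig rke-kubeconfig.yaml --context sa-test auth can-i create pods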

Here is the Rancher-generated kubeconfig (cert data and token omitted):

apiVersion: v1
kind: Config
clusters:
- name: "rocinante"
  cluster:
    server: "https://rancheradm.acme.internal.de/k8s/clusters/c-m-jwzmtg6s"
    certificate-authority-data: "xxx"

users:
- name: "rocinante"
  user:
    token: "kubeconfig-user-x5q8xwps94:xxx"


contexts:
- name: "rocinante"
  context:
    user: "rocinante"
    cluster: "rocinante"

current-context: "rocinante"

And this is the RKE-generated kubeconfig (cert data and token omitted):

apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: xxx
    server: "https://etcdnode.acme.internal.de:6443"
  name: "rocinante"
contexts:
- context:
    cluster: "rocinante"
    user: "kube-admin-rocinante"
  name: "rocinante"
current-context: "rocinante"
users:
- name: "kube-admin-rocinante"
  user:
    client-certificate-data: xxx
    client-key-data: xxx

Am I missing a configuration step in the Rancher UI, or did I get a step wrong when creating the service account? I’m thankful for any hint.

OK, since this seems to be an uncommon issue: is there a Rancher way of granting a service account access to a cluster for an external tool (in my case Azure DevOps Server)?
We tried creating an AD account that serves as the service account and creating a cluster role in Rancher for it. That seems to be the way to do it, but the generated kubeconfig is not working with the Azure DevOps Server (it works fine with kubectl, though).

The only things I can configure in the Azure DevOps Server for the Kubernetes service account are:

Clustername
Namespace
Server URL (usually via: kubectl config view --minify -o jsonpath={.clusters[0].cluster.server})
Secret for the service account (see the sketch below)
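
For the in-cluster service account from my first post, the secret for that last field could presumably be fetched like this (a sketch; whether Azure DevOps Server accepts it in this form is exactly what I am unsure about):

# Find the token secret that belongs to the service account ...
SECRET=$(kubectl -n default get serviceaccount azure-devops-svc -o jsonpath='{.secrets[0].name}')
# ... and dump it, e.g. as JSON, for the "Secret" field in Azure DevOps:
kubectl -n default get secret "$SECRET" -o json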

I really don’t know what to do here. Creating a service account in AD is no problem, but how do I get it to authenticate with the configuration fields provided by Azure DevOps Server?

I don’t know for certain, but there are two things I’d guess. The first is that there’s a way to create an API key to use rather than the certificate used in some kubeconfig files, though I have a hazy memory that user kubeconfig files might have that instead of the cert.

The other thing I’ve noticed when going through the Rancher UI: when I tried assigning rights to a group (FreeIPA, not AD, but that’s still LDAP) and then logged in a user that was a member of the group, the UI showed those rights for the user, but on the user admin page the user showed up as a normal user and I had to set the rights properly by hand. If you assigned privileges by group, maybe try checking the user and make sure the rights are what you expect?

Sorry if you’ve tried both of those, but that’s all I can think of.

Hi, thank you for your help. The Rancher-generated kubeconfig uses a bearer token. But again, the config works fine with kubectl. The user and the special role that I created for it in Rancher are working fine; I can confirm this with other external services (checkmk).
So it seems to be an issue with Azure DevOps Server and the expected response from /api/v1/nodes; at least, that is what I get as an error message.
Azure somehow accepts the RKE-generated kubeconfig, which uses a fixed endpoint (an etcd node) and certificate/key-based authentication.
Was there a change in the authentication process for kubeconfig users between Rancher 2.5.3 (Rancher-generated kubeconfigs worked with Azure) and Rancher 2.6.3 (now they don’t)?
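
For what it is worth, the call that seems to fail can be reproduced by hand with the bearer token from the Rancher-generated kubeconfig (a sketch; endpoint and token placeholder exactly as in the kubeconfig above), which might help narrow down where the problem lies:

# Reproduce the request Azure DevOps Server appears to make against the Rancher proxy:
curl -sk \
  -H "Authorization: Bearer kubeconfig-user-x5q8xwps94:xxx" \
  https://rancheradm.acme.internal.de/k8s/clusters/c-m-jwzmtg6s/api/v1/nodes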

I don’t want to open an issue for this, since I cannot point to where the problem is, and it may be a problem with Azure DevOps itself.

Have you tried asking on the Rancher Slack? There’s an #azure channel there, so possibly people there would know. I find Slack gets more traffic and responses from Rancher employees than the forum.

Not yet, but I will. Thanks for the hint!