Having an issue getting Rancher to import the cluster

I am very new to Kubernetes but not to container systems in general (I manage multiple Cloud Foundry deployments and Docker Swarm clusters).

I'm trying to get my Kubernetes cluster to attach to the Rancher instance I have deployed. When I run the apply to import it into Rancher, I get the following error:

error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
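For context, what I'm applying is the registration manifest Rancher generates for the import, roughly this (server address and token swapped for placeholders):

# <rancher-server> and <token> are placeholders for my actual Rancher URL and registration token
curl --insecure -sfL https://<rancher-server>/v3/import/<token>.yaml | kubectl apply -f -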

Inside Kubernetes I see a cattle-node-agent start for each node, but the cattle-cluster-agent fails to start.

NAMESPACE       NAME                                    READY   STATUS             RESTARTS          AGE
cattle-system   cattle-cluster-agent-7877bdf7f8-ctskp   0/1     CrashLoopBackOff   288 (2m52s ago)   24h
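For reference, the logs I mention below come from that crashing pod, pulled with something like:

kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=50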

Checking the logs, I see the following error, which appears to be related:

customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:cattle-system:cattle" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
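My guess is the two errors are connected: the cattle-admin-binding that failed to apply is what would grant the cattle service account its cluster-wide permissions. Rewritten against the v1 RBAC API, I'd expect it to look roughly like this (a sketch pieced together from the errors above, not the exact manifest Rancher generates):

apiVersion: rbac.authorization.k8s.io/v1   # was v1beta1 in the manifest that failed
kind: ClusterRoleBinding
metadata:
  name: cattle-admin-binding               # same name the apply complained about
subjects:
- kind: ServiceAccount
  name: cattle                             # the service account named in the forbidden error
  namespace: cattle-system
roleRef:
  kind: ClusterRole
  name: cattle-admin                       # guessing the role name from the binding name
  apiGroup: rbac.authorization.k8s.io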

Is there an easy way to fix this?


Same problem for me, trying to import a running native k8s cluster on Ubuntu 22.04 / Proxmox VMs.

Error:
resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"

I'm running Rancher 2.4, as 2.7 isn't compatible with Ubuntu 22.04 for me (it throws SSL blocking errors).

I tried replacing v1beta1 with v1 in the YAML and got the same error.
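In case it matters, this is roughly how I did the substitution before applying (server and token are placeholders again):

# swap the removed RBAC API version for v1 on the fly, then apply
curl --insecure -sfL https://<rancher-server>/v3/import/<token>.yaml \
  | sed 's#rbac.authorization.k8s.io/v1beta1#rbac.authorization.k8s.io/v1#g' \
  | kubectl apply -f -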

I also tried changing the user account on the role binding to the "kubernetes-admin" user that's in my .kube/config file.
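Concretely, the subject I swapped in looked roughly like this (kubernetes-admin being the user from my kubeconfig; just a guess on my part, not something from the docs):

subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes-admin   # the user name from my .kube/config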

Is it okay to just run Rancher in a stand-alone Docker container with no K8s or HA? Or should I move the Rancher install from Docker into the K8s cluster?
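To be clear, by stand-alone I mean the single-node Docker install, roughly this (image tag is whatever 2.4 release I'm pinned to, not necessarily latest):

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest   # I pin this to my 2.4 tag in practice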