Cannot list resource "customresourcedefinitions"

Hi!

I have a bare-metal single-node k8s cluster and tried to install Rancher via Helm (I had to modify the ingress route; everything else is default). But the container keeps crashing with:

2019/11/30 16:17:35 [INFO] Rancher version v2.3.3 is starting
2019/11/30 16:17:35 [INFO] Listening on /tmp/log.sock
2019/11/30 16:17:35 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:true ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features:}
2019/11/30 16:17:36 [INFO] Running in clustered mode with ID 10.244.0.76, monitoring endpoint cattle-system/rancher
panic: creating CRD store customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:default:rancher" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
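For anyone hitting the same symptom: the panic names the service account as `system:serviceaccount:default:rancher`, i.e. a `rancher` ServiceAccount in the `default` namespace. Its effective permissions can be checked with `kubectl auth can-i` (just a debugging sketch, not part of the chart):

```shell
# Ask the API server whether the service account named in the error
# is allowed to list CRDs at cluster scope.
kubectl auth can-i list customresourcedefinitions.apiextensions.k8s.io \
  --as=system:serviceaccount:default:rancher

# Compare with the namespace the ClusterRoleBinding below actually
# grants to ("cattle-system"):
kubectl auth can-i list customresourcedefinitions.apiextensions.k8s.io \
  --as=system:serviceaccount:cattle-system:rancher
```

If the first command prints `no` and the second prints `yes`, the binding and the running pod's service account are in different namespaces.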

The YAML that Helm generated for the deployment is:

# Source: rancher/templates/serviceAccount.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.3
    heritage: Helm
    release: rancher
---
# Source: rancher/templates/clusterRoleBinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.3
    heritage: Helm
    release: rancher
subjects:
  - kind: ServiceAccount
    name: rancher
    namespace: cattle-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Source: rancher/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.3
    heritage: Helm
    release: rancher
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: rancher
---
# Source: rancher/templates/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.3
    heritage: Helm
    release: rancher
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher
  template:
    metadata:
      labels:
        app: rancher
        release: rancher
    spec:
      serviceAccountName: rancher
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - rancher
                topologyKey: kubernetes.io/hostname
      containers:
        - image: rancher/rancher:v2.3.3
          imagePullPolicy: IfNotPresent
          name: rancher
          ports:
            - containerPort: 80
              protocol: TCP
          args:
            # Public trusted CA - clear ca certs
            - "--no-cacerts"
            - "--http-listen-port=80"
            - "--https-listen-port=443"
            - "--add-local=auto"
          env:
            - name: CATTLE_NAMESPACE
              value: cattle-system
            - name: CATTLE_PEER_SERVICE
              value: rancher
          livenessProbe:
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 30
          resources:
            {}
          #volumeMounts:
      #volumes:
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.3
    heritage: Helm
    release: rancher
  #annotations:
  #  nginx.ingress.kubernetes.io/ssl-redirect: "false" # turn off ssl redirect for external.
  #  nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
  #  nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
  #  nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`XXX`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: default
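One thing worth checking: the ServiceAccount manifest above has no `metadata.namespace`, so it is created in whichever namespace the release was installed into, while the ClusterRoleBinding's subject hard-codes `namespace: cattle-system`. A sketch of how to verify where things actually landed (and, assuming the `rancher-stable` Helm repo and Helm 3 syntax, how a reinstall into the expected namespace might look; the hostname is a placeholder):

```shell
# Which namespace did the rancher ServiceAccount actually end up in?
kubectl get serviceaccount rancher --all-namespaces

# What subject does the ClusterRoleBinding grant cluster-admin to?
kubectl get clusterrolebinding rancher -o yaml

# If the SA is in "default" but the binding expects "cattle-system",
# reinstalling into cattle-system would line the two up:
kubectl create namespace cattle-system
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=<your-rancher-host>
```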

Any ideas how I can solve this? I don't know anything about RBAC in k8s (yet) :frowning:

Thanks!
