Panic: interface conversion: reference.repository is not reference.Tagged: missing method Tag

Hello all,
I need help. Every time I run `rke up`, it stops with the error below.

I have included my cluster.yml, the error from the workstation, and the error from the master nodes.

Can anyone tell me what I am doing wrong?
Thanks for your help.

INFO[0041] Pre-pulling kubernetes images
INFO[0041] Pulling image [ls] on host [192.168.0.63], try #1
INFO[0041] Pulling image [ls] on host [192.168.0.61], try #1
INFO[0041] Pulling image [ls] on host [192.168.0.64], try #1
INFO[0041] Pulling image [ls] on host [192.168.0.62], try #1
INFO[0041] Pulling image [ls] on host [192.168.0.60], try #1
INFO[0043] Pulling image [ls] on host [192.168.0.63], try #1
INFO[0043] Pulling image [ls] on host [192.168.0.64], try #1
INFO[0043] Pulling image [ls] on host [192.168.0.62], try #1
INFO[0043] Pulling image [ls] on host [192.168.0.60], try #1
INFO[0043] Pulling image [ls] on host [192.168.0.61], try #1
INFO[0047] Pulling image [ls] on host [192.168.0.63], try #1
INFO[0047] Pulling image [ls] on host [192.168.0.64], try #1
INFO[0047] Pulling image [ls] on host [192.168.0.60], try #1
INFO[0047] Pulling image [ls] on host [192.168.0.61], try #1
INFO[0047] Pulling image [ls] on host [192.168.0.62], try #1
INFO[0048] Kubernetes images pulled successfully
panic: interface conversion: reference.repository is not reference.Tagged: missing method Tag

goroutine 1 [running]:
github.com/rancher/rke/util.GetImageTagFromImage(0xc000685880, 0x2, 0xc00049b626, 0x5, 0x27c6c60, 0x0)
/go/src/github.com/rancher/rke/util/util.go:115 +0x97
github.com/rancher/rke/cluster.(*Cluster).getDefaultKubernetesServicesOptions(0xc0003f7800, 0xc00089cd80, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1176e4f, …)
/go/src/github.com/rancher/rke/cluster/plan.go:1024 +0x240
github.com/rancher/rke/cluster.(*Cluster).GetKubernetesServicesOptions(0xc0003f7800, 0xc00089cd80, 0x5, 0xc0007ba3a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, …)
/go/src/github.com/rancher/rke/cluster/plan.go:998 +0xea
github.com/rancher/rke/cluster.(*Cluster).DeployControlPlane(0xc0003f7800, 0x1b3e100, 0xc0000b8010, 0xc0007ba3a0, 0x0, 0x0, 0x19, 0xb, 0x0)
/go/src/github.com/rancher/rke/cluster/cluster.go:123 +0x201
github.com/rancher/rke/cmd.ClusterUp(0x1b3e100, 0xc0000b8010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x18c0481, 0xb, 0x0, …)
/go/src/github.com/rancher/rke/cmd/up.go:195 +0xcee
github.com/rancher/rke/cmd.clusterUpFromCli(0xc00021b4a0, 0x0, 0xc00021b4a0)
/go/src/github.com/rancher/rke/cmd/up.go:302 +0x65e
github.com/urfave/cli.HandleAction(0x1656be0, 0x19790a8, 0xc00021b4a0, 0xc000287e00, 0x0)
/go/pkg/mod/github.com/urfave/cli@v1.20.0/app.go:490 +0xc8
github.com/urfave/cli.Command.Run(0x18b975a, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, 0x18cb227, 0x14, 0x0, …)
/go/pkg/mod/github.com/urfave/cli@v1.20.0/command.go:210 +0x991
github.com/urfave/cli.(*App).Run(0xc000133520, 0xc0000b0020, 0x2, 0x2, 0x0, 0x0)
/go/pkg/mod/github.com/urfave/cli@v1.20.0/app.go:255 +0x6ab
main.mainErr(0xc000218fc0, 0x1afe400)
/go/src/github.com/rancher/rke/main.go:79 +0x14b9
main.main()
/go/src/github.com/rancher/rke/main.go:24 +0x4e
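The panic itself is an ordinary Go type assertion failure: `util.GetImageTagFromImage` parses the image string and asserts the parsed reference to an interface with a `Tag()` method, and a bare name like `ls` parses to a reference without a tag, so the assertion panics. A minimal sketch of the mechanism (the types here are hypothetical stand-ins for illustration, not RKE's actual ones):

```go
package main

import "fmt"

// Tagged mirrors the shape of the image-reference interface RKE asserts to
// (hypothetical simplified type, not the real library's).
type Tagged interface {
	Tag() string
}

// repository stands in for a parsed image name with no ":tag" suffix,
// like the bare name "ls" -- it has no Tag method.
type repository struct {
	name string
}

func main() {
	var ref interface{} = repository{name: "ls"}

	// The single-return form panics when the value lacks the method set:
	//   tag := ref.(Tagged) // panic: interface conversion: main.repository
	//                       // is not main.Tagged: missing method Tag
	// The comma-ok form fails safely instead:
	if _, ok := ref.(Tagged); ok {
		fmt.Println("image has a tag")
	} else {
		fmt.Println("image reference has no tag")
	}
}
```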

My cluster.yml:

[rke@localhost ~]$ cat cluster.yml

# If you intended to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 192.168.0.60
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.61
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.62
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.63
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.64
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_domain: rke
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
network:
  plugin: calico
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
  tolerations: []
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.15-rancher1
  alpine: rancher/rke-tools:v0.1.75
  nginx_proxy: rancher/rke-tools:v0.1.75
  cert_downloader: rancher/rke-tools:v0.1.75
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.75
  kubedns: rancher/k8s-dns-kube-dns:1.15.2
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.2
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.2
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.9
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  nodelocal: rancher/k8s-dns-node-cache:1.15.7
  kubernetes: ls
  flannel: rancher/coreos-flannel:v0.12.0
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
  calico_node: rancher/calico-node:v3.13.4
  calico_cni: rancher/calico-cni:v3.13.4
  calico_controllers: rancher/calico-kube-controllers:v3.13.4
  calico_ctl: rancher/calico-ctl:v3.13.4
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  canal_node: rancher/calico-node:v3.13.4
  canal_cni: rancher/calico-cni:v3.13.4
  canal_flannel: rancher/coreos-flannel:v0.12.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  weave_node: weaveworks/weave-kube:2.6.4
  weave_cni: weaveworks/weave-npc:2.6.4
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.35.0-rancher2
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.6
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: null
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
  tolerations: []
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
  tolerations: []
restore:
  restore: false
  snapshot_name: ""
dns: null

On the master node I get this:

Jul 5 12:50:07 rke-master-01 dockerd-current: time="2021-07-05T12:50:07.067500851+01:00" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Jul 5 12:50:07 rke-master-01 dockerd-current: time="2021-07-05T12:50:07.067596674+01:00" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Jul 5 12:50:07 rke-master-01 dockerd-current: time="2021-07-05T12:50:07.067666250+01:00" level=info msg="Translating \"denied: requested access to the resource is denied\" to \"repository docker.io/ls not found: does not exist or no pull access\""

kubernetes: ls

Not sure why the image for kubernetes is set to ls, but it makes sense that RKE tries to pull the image ls if you configure that. In general, I would step away from using system_images: remove it from cluster.yml and configure kubernetes_version based on the versions listed when you run rke config -l -a.
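For reference, a minimal cluster.yml sketch along those lines, without a system_images section, pinning kubernetes_version instead (the version string and node addresses here are only illustrations; use a version that `rke config -l -a` actually lists for your RKE binary):

```yaml
# Minimal sketch: let RKE resolve all system images from kubernetes_version.
nodes:
- address: 192.168.0.60
  user: rke
  role: [controlplane, etcd]
- address: 192.168.0.63
  user: rke
  role: [worker]
# Example value only -- pick one printed by `rke config -l -a`:
kubernetes_version: v1.20.8-rancher1-1
network:
  plugin: calico
```

With system_images removed, RKE fills in the matching rancher/hyperkube and tooling images for that Kubernetes version itself, which avoids exactly the kind of untagged-image value (`kubernetes: ls`) that triggered the panic.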