Not able to add a node to an existing cluster

INFO[0008] [sync] Syncing nodes Labels and Taints
FATA[0020] [ "192.11.1.1" not found]
while running the rke up command

DEBU[0085] [hosts] Can't find node by name

Please share the RKE version used, the cluster.yml used, the full log, and the exact command used.
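
While you gather that: this particular FATA usually comes out of the [sync] step, which updates labels and taints through the Kubernetes API and fails when no node object exists under the name RKE computed for a host. A quick way to compare both sides (a rough check, with the kubeconfig and file names taken from your log, adjust to your setup):

    # Node names as the API server knows them (what [sync] looks up):
    kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes -o name

    # Names RKE will use, from cluster.yml (hostname_override takes precedence over address):
    grep -E 'address:|hostname_override:' rancher-cluster.yml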

INFO[0000] Running RKE version: v1.0.8
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] Successfully Deployed state file at [./rancher-cluster.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.20.1.141]
INFO[0000] [dialer] Setup tunnel for host [192.20.5.104]
INFO[0000] [dialer] Setup tunnel for host [192.20.5.107]
INFO[0000] [dialer] Setup tunnel for host [192.20.5.110]
INFO[0000] [dialer] Setup tunnel for host [192.20.5.106]
INFO[0000] [dialer] Setup tunnel for host [192.20.1.139]
INFO[0000] [dialer] Setup tunnel for host [192.20.1.146]
INFO[0000] [dialer] Setup tunnel for host [192.20.1.138]
INFO[0000] [dialer] Setup tunnel for host [192.20.1.140]
INFO[0000] [dialer] Setup tunnel for host [192.20.1.143]
INFO[0000] [dialer] Setup tunnel for host [192.20.5.105]
INFO[0000] [network] No hosts added existing cluster, skipping port check
INFO[0000] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.1.138], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.5.105], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.1.140], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.1.141], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.1.139], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.5.106], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.5.110], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.5.107], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.1.146], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.5.104], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.20.1.143], try #1
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.139]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.143]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.146]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.140]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.110]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.141]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.138]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.107]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.104]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.106]
INFO[0000] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.105]
INFO[0000] Starting container [cert-deployer] on host [192.20.1.146], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.5.110], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.5.107], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.1.139], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.1.141], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.5.104], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.5.106], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.1.140], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.5.105], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.1.143], try #1
INFO[0000] Starting container [cert-deployer] on host [192.20.1.138], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.1.139], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.1.140], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.1.141], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.1.146], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.5.107], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.5.106], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.5.104], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.5.110], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.1.138], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.5.105], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.20.1.143], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.1.139], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.1.139], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.1.140], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.1.146], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.1.146], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.1.140], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.1.141], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.1.141], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.5.107], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.5.106], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.5.110], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.5.104], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.5.110], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.5.107], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.5.106], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.5.104], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.1.138], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.5.105], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.1.138], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.20.1.143], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.1.143], try #1
INFO[0006] Removing container [cert-deployer] on host [192.20.5.105], try #1
INFO[0006] [reconcile] Rebuilding and updating local kube config
INFO[0006] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
INFO[0006] [reconcile] host [192.20.1.146] is active master on the cluster
INFO[0006] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0006] [reconcile] Reconciling cluster state
INFO[0006] [reconcile] Check etcd hosts to be deleted
INFO[0006] [reconcile] Check etcd hosts to be added
INFO[0006] [reconcile] Rebuilding and updating local kube config
INFO[0006] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
INFO[0006] [reconcile] host [192.20.1.146] is active master on the cluster
INFO[0006] [reconcile] Reconciled cluster state successfully
INFO[0006] Pre-pulling kubernetes images
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.1.146]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.1.143]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.1.139]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.1.138]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.1.140]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.1.141]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.5.110]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.5.104]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.5.106]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.5.107]
INFO[0006] Image [rancher/hyperkube:v1.17.5-rancher1] exists on host [192.20.5.105]
INFO[0006] Kubernetes images pulled successfully
INFO[0006] [etcd] Building up etcd plane...
INFO[0006] [etcd] Successfully started etcd plane... Checking etcd cluster health
INFO[0006] [authz] Creating rke-job-deployer ServiceAccount
INFO[0006] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0006] [authz] Creating system:node ClusterRoleBinding
INFO[0006] [authz] system:node ClusterRoleBinding created successfully
INFO[0006] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0006] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0006] Successfully Deployed state file at [./rancher-cluster.rkestate]
INFO[0006] [state] Saving full cluster state to Kubernetes
INFO[0007] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state
INFO[0007] [worker] Building up Worker Plane...
INFO[0007] [worker] Successfully started Worker Plane...
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.146]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.141]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.140]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.138]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.139]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.1.143]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.110]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.104]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.106]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.105]
INFO[0007] Image [rancher/rke-tools:v0.1.56] exists on host [192.20.5.107]
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.1.143], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.1.139], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.1.146], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.5.104], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.5.110], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.1.140], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.1.138], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.1.141], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.5.105], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.5.106], try #1
INFO[0007] Starting container [rke-log-cleaner] on host [192.20.5.107], try #1
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.1.139]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.1.139], try #1
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.1.140]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.1.140], try #1
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.1.143]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.1.143], try #1
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.5.110]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.5.110], try #1
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.1.138]
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.1.146]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.1.138], try #1
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.1.146], try #1
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.1.141]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.1.141], try #1
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.5.105]
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.5.106]
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.5.104]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.5.105], try #1
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.5.106], try #1
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.5.104], try #1
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.1.139]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.5.110]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.1.143]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.1.146]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.1.140]
INFO[0007] [cleanup] Successfully started [rke-log-cleaner] container on host [192.20.5.107]
INFO[0007] Removing container [rke-log-cleaner] on host [192.20.5.107], try #1
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.1.138]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.5.106]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.1.141]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.5.104]
INFO[0007] [remove/rke-log-cleaner] Successfully removed container on host [192.20.5.105]
INFO[0008] [remove/rke-log-cleaner] Successfully removed container on host [192.20.5.107]
INFO[0008] [sync] Syncing nodes Labels and Taints
FATA[0020] [ "vcserver211" not found]
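
For context (my understanding of RKE's behavior here, worth double-checking): the [sync] step has to find a node object whose name matches what RKE derived for each host, i.e. hostname_override when set, otherwise address. You can see whether the cluster actually has a node under the failing name:

    kubectl --kubeconfig kube_config_rancher-cluster.yml get node vcserver211

If that returns NotFound, the registered node name and the name in cluster.yml have diverged.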

Please share the contents of rancher-cluster.yml and the output of kubectl get nodes.

nodes:
- address: 192.2.1.6
  port: "22"
  internal_address: "192.2.1.6"
  role:
  - controlplane
  - etcd
  hostname_override: "test1302"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
  taints: []
- address: 192.2.44.92
  port: "22"
  internal_address: "192.2.44.92"
  role:
  - controlplane
  - etcd
  hostname_override: "test1303"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: prod
  taints: []
- address: 192.2.1.143
  port: "22"
  internal_address: "192.2.1.143"
  role:
  - controlplane
  - etcd
  hostname_override: "test1304"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
  taints: []
# prod worker
- address: 192.2.1.138
  port: "22"
  internal_address: "192.2.1.138"
  role:
  - worker
  hostname_override: "test2305"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: pre-prod
  taints: []
- address: 192.2.44.24
  port: "22"
  internal_address: "192.2.44.24"
  role:
  - worker
  hostname_override: "test2306"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: pre-prod
  taints: []
- address: 192.2.1.139
  port: "22"
  internal_address: "192.2.1.139"
  role:
  - worker
  hostname_override: "test2307"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: pre-prod
  taints: []
- address: 192.2.44.1925
  port: "22"
  internal_address: "192.2.44.25"
  role:
  - worker
  hostname_override: "test2308"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: pre-prod
  taints: []
- address: 192.2.1.140
  port: "22"
  internal_address: "192.2.1.140"
  role:
  - worker
  hostname_override: "test2309"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: pre-prod
  taints: []
- address: 192.2.44.1927
  port: "22"
  internal_address: "192.2.44.1927"
  role:
  - worker
  hostname_override: "test23192"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: pre-prod
  taints: []
- address: 192.2.1.141
  port: "22"
  internal_address: "192.2.1.141"
  role:
  - worker
  hostname_override: "test2311"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test2"
    env: pre-prod
  taints: []
- address: 192.2.44.1926
  port: "22"
  internal_address: "192.2.44.1926"
  role:
  - worker
  hostname_override: "test231"
  user: test9
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels:
    app: "test-eco"
    env: pre-prod
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: true
    retention: "6h"
    creation: "24h"
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 192.0.0.0/16
    service_node_port_range: "40000-42767"
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 192.0.0.0/16
    service_cluster_ip_range: 192.0.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 192.0.0.192
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.56
  nginx_proxy: rancher/rke-tools:v0.1.56
  cert_downloader: rancher/rke-tools:v0.1.56
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.56
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  nodelocal: rancher/k8s-dns-node-cache:1.15.7
  kubernetes: rancher/hyperkube:v1.17.5-rancher1
  flannel: rancher/coreos-flannel:v0.11.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher5
  calico_node: rancher/calico-node:v3.13.0
  calico_cni: rancher/calico-cni:v3.13.0
  calico_controllers: rancher/calico-kube-controllers:v3.13.0
  calico_ctl: rancher/calico-ctl:v2.0.0
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.0
  canal_node: rancher/calico-node:v3.13.0
  canal_cni: rancher/calico-cni:v3.13.0
  canal_flannel: rancher/coreos-flannel:v0.11.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.0
  weave_node: weaveworks/weave-kube:2.5.2
  weave_cni: weaveworks/weave-npc:2.5.2
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.3
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: "v1.17.5-rancher1"
private_registries:
- url: "192.168.2.2:9797"
  user: "test"
  password: "test"
  is_default: true
ingress:
  provider: "nginx"
  options:
    use-forwarded-headers: true
    map-hash-bucket-size: "18"
    ssl-protocols: TLSv1.2
  node_selector:
    app: "test-eco"
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: "local"
cloud_provider:
  name: ""
prefix_path: "/opt/rke"
addon_job_timeout: 30
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns:
  provider: "coredns"

When we run kubectl get nodes, all of our IPs are printed as the node names.
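
That would explain the failure. If kubectl get nodes prints IP addresses, the existing nodes were registered under their address, and hostname_override values added to cluster.yml afterwards will never match them: as far as I know, RKE cannot rename a node that has already joined, so the [sync] lookup by the new override name dies with the FATA above. A rough way to confirm the mismatch (file names assumed from your log):

    # Registered node names (IPs, in your case):
    kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes -o custom-columns=NAME:.metadata.name

    # Names RKE will look up instead:
    grep 'hostname_override:' rancher-cluster.yml

If the two lists differ for nodes already in the cluster, removing (or matching up) the hostname_override entries for those existing nodes and re-running rke up should get past the error; set an override only on the node you are newly adding.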