How to edit a cluster to add a cloud_provider

We have a running cluster created with Rancher, and now we need to enable vSphere as the cloud provider for storage.
In the "Kubernetes Options" section the Cloud Provider field is not editable.
Are we supposed to modify the YAML by hand? Is this the right approach?

rancher: v2.3.3
Kubernetes: v1.16.7-rancher1-1

Thanks


I also have the same problem. Any solutions?

I got an answer on Slack; we proceeded to edit the YAML manually.

Can you copy-paste what you changed?

:frowning:
Just deleted the cluster after seeing this in RKE docs:

After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn’t allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.

Hi,

this is a sample config:

docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
name: my-cluster
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  authentication:
    strategy: x509
  cloud_provider:
    name: vsphere
    vsphereCloudProvider:
      global:
        insecure-flag: true
        soap-roundtrip-count: 0
      virtual_center:
        xxx.xxx.xxx.xxx:
          user: xxxxxxx@vsphere.local
          password: xxxxxx
          port: 443
          datacenters: MY-DATACENTER
      workspace:
        server: xxx.xxx.xxx.xxx
        folder: MYFOLDER
        default-datastore: YYYYY/XXXXSX
        datacenter: MY-DATACENTER
        resourcepool-path: POOL/Resources
  ignore_docker_version: true
  ingress:
    provider: none
  kubernetes_version: v1.16.8-rancher1-2
  monitoring:
    provider: metrics-server
  network:
    plugin: calico
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        s3_backup_config:
          access_key: xxxx
          bucket_name: xxxx
          endpoint: xxxx
          secret_key: xxx
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube_api:
      always_pull_images: false
      pod_security_policy: false
      service_cluster_ip_range: xxx.xxx.xxx.xxx/16
      service_node_port_range: 30000-32767
    kube-controller:
      cluster_cidr: xxx.xxx.xxx.xxx/16
      service_cluster_ip_range: xxx.xxx.xxx.xxx/16
    kubelet:
      cluster_dns_server: xxx.xxx.xxx.xxx
      cluster_domain: cluster.local
      fail_swap_on: false
      generate_serving_certificate: false
  ssh_agent_auth: false
windows_prefered_cluster: false

However, we had a problem activating the provider in an existing cluster: when Rancher started updating the nodes, their names changed from the ones we had given them when the cluster was first created (via the --node-name option) to the VMs’ hostnames, so Rancher itself was no longer able to recognize them and lost control of the cluster (that’s why I posted this question).

So, to add the provider, we had to create a new cluster and migrate workloads to it.

If you didn’t provide custom node names, or you know how to handle their change, you can try adding the provider config. Once it is added, you also have to patch the nodes at the Kubernetes level to set their vSphere provider ID, as suggested by this article, with something like this:

kubectl patch node $NODE_NAME -p "{\"spec\":{\"providerID\":\"vsphere://$VM_UUID\"}}"
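
To run this across every node, a minimal sketch along these lines could work, assuming govc is configured against your vCenter (GOVC_URL, GOVC_USERNAME, GOVC_PASSWORD) and that the Kubernetes node names match the VM names in vSphere; the jq path is based on govc’s -json output and may differ between govc versions. Note that a node’s providerID can only be set once, so the patch only works on nodes that don’t already have one.

  # Sketch: set providerID on every node in the cluster.
  # Assumes node names match vSphere VM names; adjust the govc lookup otherwise.
  for NODE_NAME in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
    # Look up the VM's UUID in vCenter (jq path assumed, check your govc version's JSON output)
    VM_UUID=$(govc vm.info -json "$NODE_NAME" | jq -r '.VirtualMachines[0].Config.Uuid')
    kubectl patch node "$NODE_NAME" -p "{\"spec\":{\"providerID\":\"vsphere://$VM_UUID\"}}"
  done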

Hope this helps.

Thanks @paolo.morgano, really appreciated.

For me, deleting and creating a new cluster also didn’t work for some reason, so I deleted Rancher altogether and am now installing it from scratch.

Best,
Alp

For me, deleting and creating a new cluster also didn’t work for some reason

I had the same issue with a brand-new Rancher installation: on the first attempt my cluster had no provider after the fresh cluster setup, even though I had supplied the YAML config during creation, so I had to add it as described in the previous post. The second time I tried to create a new cluster, it worked out of the box. I suspect there was an indentation issue in the provider block of my YAML when I copy-pasted it into the web UI the first time (see the indentation sketch below).
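
For reference, this is roughly the nesting the cloud_provider block needs under rancher_kubernetes_engine_config (a trimmed-down sketch of the config posted above, with placeholder values):

  rancher_kubernetes_engine_config:
    cloud_provider:
      name: vsphere
      vsphereCloudProvider:
        global:
          insecure-flag: true
        virtual_center:
          xxx.xxx.xxx.xxx:
            user: xxxxxxx@vsphere.local
            password: xxxxxx
            port: 443
            datacenters: MY-DATACENTER
        workspace:
          server: xxx.xxx.xxx.xxx
          datacenter: MY-DATACENTER
          default-datastore: YYYYY
          folder: MYFOLDER
          resourcepool-path: POOL/Resources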