Hi,
this is a sample config:
```yaml
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
name: my-cluster
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  authentication:
    strategy: x509
  cloud_provider:
    name: vsphere
    vsphereCloudProvider:
      global:
        insecure-flag: true
        soap-roundtrip-count: 0
      virtual_center:
        xxx.xxx.xxx.xxx:
          user: xxxxxxx@vsphere.local
          password: xxxxxx
          port: 443
          datacenters: MY-DATACENTER
      workspace:
        server: xxx.xxx.xxx.xxx
        folder: MYFOLDER
        default-datastore: YYYYY/XXXXSX
        datacenter: MY-DATACENTER
        resourcepool-path: POOL/Resources
  ignore_docker_version: true
  ingress:
    provider: none
  kubernetes_version: v1.16.8-rancher1-2
  monitoring:
    provider: metrics-server
  network:
    plugin: calico
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        s3_backup_config:
          access_key: xxxx
          bucket_name: xxxx
          endpoint: xxxx
          secret_key: xxx
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube_api:
      always_pull_images: false
      pod_security_policy: false
      service_cluster_ip_range: xxx.xxx.xxx.xxx/16
      service_node_port_range: 30000-32767
    kube-controller:
      cluster_cidr: xxx.xxx.xxx.xxx/16
      service_cluster_ip_range: xxx.xxx.xxx.xxx/16
    kubelet:
      cluster_dns_server: xxx.xxx.xxx.xxx
      cluster_domain: cluster.local
      fail_swap_on: false
      generate_serving_certificate: false
  ssh_agent_auth: false
windows_prefered_cluster: false
```
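Once the provider is in place (either at cluster creation with a config like the one above, or added later as described below), a quick sanity check is to look at each node's providerID: with the vSphere provider working it should read vsphere://<vm-uuid>. This is just a plain kubectl query, nothing vSphere-specific:

```bash
# Show every node with its providerID; an empty value means the cloud
# provider did not register that node.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID
```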
However, we had some problems activating the provider in an existing cluster: when Rancher started updating the nodes, they changed their names from the ones we had given them when the cluster was first created (via the --node-name option) to the VMs' hostnames, so Rancher itself was unable to recognize them and lost control of the cluster (that's why I posted this question).
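For context, our nodes had been registered with a custom-cluster command roughly like the one below; the server URL, token, checksum, node name and agent tag are placeholders you would replace with your own values. --node-name is the option whose value was later overridden by the VM hostname:

```bash
# Example Rancher agent registration command (placeholders only, adjust roles as needed).
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.3.5 \
  --server https://rancher.example.com \
  --token <registration-token> \
  --ca-checksum <ca-checksum> \
  --node-name my-node-1 \
  --etcd --controlplane --worker
```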
So, to add the provider, we had to create a new cluster and migrate workloads to it.
If you didn't provide custom node names, or you know how to handle the renaming, you can try adding the provider config to the existing cluster. Once it is added, you also have to patch the nodes at the Kubernetes level so that their spec contains the vSphere provider ID, as suggested by this article, with something like this:
```bash
kubectl patch node $NODE_NAME -p "{\"spec\":{\"providerID\":\"vsphere://$VM_UUID\"}}"
```
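If you have more than a couple of nodes, a small loop can do the patching for all of them. This is only a sketch: the node-to-UUID map is something you have to fill in yourself (the UUID is the VM's UUID as seen by vSphere, e.g. from govc vm.info or the vSphere UI), and since providerID cannot be changed once it is set, the script skips nodes that already have one:

```bash
#!/usr/bin/env bash
# Hypothetical helper: set spec.providerID on nodes that are still missing it.
# Fill in the map with your own node names and vSphere VM UUIDs.
declare -A NODE_UUIDS=(
  ["my-node-1"]="42251111-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  ["my-node-2"]="42252222-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
)

for node in "${!NODE_UUIDS[@]}"; do
  current=$(kubectl get node "$node" -o jsonpath='{.spec.providerID}')
  if [ -z "$current" ]; then
    kubectl patch node "$node" \
      -p "{\"spec\":{\"providerID\":\"vsphere://${NODE_UUIDS[$node]}\"}}"
  else
    echo "$node already has providerID $current, skipping"
  fi
done
```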
Hope this helps.