Failed to bring up Worker Plane

I’m trying to connect external VPS hosts to create a cluster, but whatever host I connect, I get the following error:
[workerPlane] Failed to bring up Worker Plane: [Failed to verify healthcheck: Failed to check http://localhost:10248/healthz for service [kubelet] on host [xxx.xxx.xxx.xxx]: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused, log: + umount ....
and

connection refused, log: I1107 20:02:46.720393 15638 aws.go:1149] Zone not specified in configuration file; querying AWS metadata service]
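
For anyone hitting the same message, the two errors point at two things to check on the failing node. This is a rough sketch, assuming a standard RKE-provisioned node where the kubelet runs as a Docker container named `kubelet`:

```
# Probe the kubelet health endpoint that the provisioner checks;
# "connection refused" means the kubelet never came up (or crashed).
curl -sv http://localhost:10248/healthz

# The aws.go log line suggests the kubelet was started with an AWS
# cloud provider. Check the container's arguments for the flag:
docker inspect kubelet | grep -i cloud-provider
```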

It is very strange that there are almost no similar reports on the Internet, and no instructions for dealing with this. It feels like a verdict; why does nothing ever work the first time? ))

I need your help. How can I find out what is happening?

Thanks!

Please supply the output of `docker ps -a` and `docker logs kubelet 2>&1`, as this will give more information to investigate.
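
For completeness, here is how you would run those on the failing node (assuming an RKE node, where the kubelet is a Docker container named `kubelet`):

```
# List all containers, including exited ones, to see if kubelet crashed.
docker ps -a --filter name=kubelet

# Dump the kubelet's combined stdout/stderr; the last lines usually
# contain the fatal error that prevented it from starting.
docker logs kubelet 2>&1 | tail -n 100
```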

I don’t know what “aws” has to do with this; I’m not using that provider.
???

This page doesn’t work:
https://rancher.com/docs/rancher/v2.x/en/admin-settings/rke-metadata/

I use VPS nodes (virtual machines hosted by an infrastructure provider); maybe I need to follow special instructions?

If you are running AWS EC2 instances, and you have configured your cluster with the Amazon cloud provider (in Cluster Options), the nodes must be able to reach the EC2 metadata service. If the “external VPS” is not an AWS EC2 instance, you can’t join it to this cluster because of the Amazon cloud provider configuration.
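
As a side note, you can verify whether a node can reach EC2 metadata with a one-liner (169.254.169.254 is the standard EC2 metadata endpoint):

```
# On a real EC2 instance this prints the availability zone; on a
# non-AWS VPS it times out, which matches the aws.go log line above.
curl -s --connect-timeout 3 http://169.254.169.254/latest/meta-data/placement/availability-zone
```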

What does my cluster have to do with Amazon? I don’t have any Amazon-related settings anywhere. Maybe there is a checkbox I haven’t noticed in the cluster configuration tab?

The configuration is described here: https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/.

It is under Cluster Options, but it is not visible in the screenshots.

Here’s the screen where I select the type of node provider.

It is not clear why the cluster is trying to connect to AWS. I’m using Rancher version 2.3.2.

On Rancher version 2.2.9, the cluster deployed successfully.

:no_entry: 2.3.1 and 2.3.2 fail to launch the cluster from external nodes, complaining about AWS! :no_entry:

I’ve filed a ticket on GitHub:

The documentation says to configure a Kubernetes cloud provider if you want to use features from the cloud you are running on. By default, this is None. Why was it configured as External?

A quick check suggests that an AWS cloud provider config is written when External is picked and saved; this needs to be investigated. If you don’t want a Kubernetes cloud provider, it should be set to None, and you shouldn’t see the error.
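
For anyone who wants to check their own cluster: in Rancher’s “Edit as YAML” view, the bug described above would show up as a `cloud_provider` block under the RKE config. A hedged sketch of what to look for (exact keys may vary between Rancher versions; with the provider set to None, the block should be absent):

```yaml
# Symptom of the External->AWS bug: an aws cloud_provider block appears
# in the cluster YAML even though no Amazon settings were chosen.
rancher_kubernetes_engine_config:
  cloud_provider:
    name: aws        # remove this block (or re-save with "None") to fix
```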


Thank you so much for your support. I launched the cluster with the None setting.