Importing new k3s cluster into Rancher 2.6.3

Hi

I successfully installed Rancher (latest) on a local k3s cluster ("local") via Helm.
So far so good…

Then, I installed a fresh k3s cluster with 3 nodes, again successfully.
Both clusters are basically working fine.

Now, I wanted to import the freshly installed k3s cluster into Rancher.

I used the following command (because of the self-signed SSL certificate):

curl --insecure -sfL https://rancher.arahome.ml/v3/import/rkw7wmrkl9j6hvsv6rk729gwwbbkvbnvbj8tb8qzz674kwxz5j6g77_c-m-7z8x4tvn.yaml | kubectl apply -f -

I also created a cluster-admin ClusterRoleBinding for the user "default", matching the user in the ".kube/config" file:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user default

Then I checked the cluster agent logs:

kubectl logs -n cattle-system cattle-cluster-agent-577dd689c-vq5sl


ERROR: https://rancher.dom.com/ping is not accessible (Failed to connect to rancher.dom.com port 443: Connection timed out)

I checked on the Rancher cluster (local) whether port 443 is exposed:

kubectl get svc -A --kubeconfig=".kube/config.rancher" | grep traefik
kube-system   traefik   LoadBalancer   10.43.97.144   192.168.2.161,192.168.2.171,192.168.2.172   80:30091/TCP,443:30813/TCP   4d22h

What am I missing here?

Thx ara

In addition, I have fully TCP/IP connectivity on all nodes…
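Note that TCP/IP connectivity between the nodes does not guarantee that pods can reach the Rancher URL, since pods resolve names through the cluster DNS. A quick way to check this from inside the downstream cluster (a sketch; `rancher.dom.com` is the hostname from the agent log above, and the images are arbitrary choices):

```shell
# Test reachability of the Rancher URL from a pod, not from a node.
# Pods use the cluster DNS policy, so this can differ from node-level checks.
kubectl run rancher-ping --rm -it --restart=Never --image=curlimages/curl -- \
  curl -skI https://rancher.dom.com/ping

# Compare name resolution from inside a pod with resolution on the node:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup rancher.dom.com
```

If `curl` or `nslookup` fails from the pod while working on the node, the problem is in-cluster DNS rather than the network.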

I noticed a discrepancy between your Rancher server URL and the site that the imported cluster is trying to ping. Those URLs need to be the same. Can you go into the global settings and make sure server-url matches where Rancher is actually running?
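One way to check this without the UI (a sketch; Rancher stores `server-url` as a `Setting` custom resource on the local cluster, and the kubeconfig path is the one used earlier in this thread):

```shell
# On the Rancher (local) cluster: show the configured server-url setting.
kubectl get settings.management.cattle.io server-url \
  --kubeconfig=.kube/config.rancher
```

The value must match the URL the agent tries to ping (https://rancher.dom.com in the log above); it can also be changed in the UI under Global Settings.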

I ran into the same issue again, but now I have really found a solution.

But why does that do the trick?
Solved this by editing the deployment and changing the DNS policy from ClusterFirst to Default:

kubectl edit deployment cattle-cluster-agent -n cattle-system

dnsPolicy: Default

It works now after that change, but I don't get why 🙂
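A likely explanation: with `ClusterFirst`, the agent pod resolves names through the in-cluster DNS (CoreDNS), which forwards unknown names to its own upstream. If `rancher.dom.com` only resolves via the nodes' resolver (e.g. a local DNS server or hosts entry), the pod gets no answer or a wrong one. `dnsPolicy: Default` makes the pod inherit the node's `/etc/resolv.conf` instead. The same edit can also be applied non-interactively (a sketch, equivalent to the `kubectl edit` step above):

```shell
# Patch the agent deployment instead of editing it interactively.
kubectl -n cattle-system patch deployment cattle-cluster-agent \
  -p '{"spec":{"template":{"spec":{"dnsPolicy":"Default"}}}}'
```

Note this is a workaround; the underlying fix would be making the Rancher hostname resolvable through the cluster DNS as well.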