Imported k3s Cluster remains in "Pending" state

I have 2 nodes:

  1. A k3s cluster with Rancher successfully installed on it.
  2. A k3s cluster with nothing installed on it.

I followed the import instructions in Rancher (including creating the ClusterRoleBinding for the admin user) to import the k3s cluster on the second node.
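
Roughly, those steps boil down to the commands below, which is what the Rancher import dialog provides; the admin user and the import manifest token are placeholders here, since the exact URL is generated per cluster:

# kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user <admin-user>
# kubectl apply -f https://1.2.3.4:8443/v3/import/<token>.yaml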

The issue:

This cluster is currently Pending; areas that interact directly with it will not be available until the API is ready.

This message appears right after importing the k3s cluster and never goes away. The cluster stays in the Pending state no matter how much time passes, which means I cannot manage it; apart from this message and the cluster's name, nothing about it shows up in the Rancher GUI.

Log of the imported k3s cluster:

...
...
E0430 12:03:39.847139    1230 pod_workers.go:191] Error syncing pod 5248515b-e886-42a5-9c78-745ed93d7e4e ("cattle-cluster-agent-5959b99bb8-v7bnd_cattle-system(5248515b-e886-42a5-9c78-745ed93d7e4e)"), skipping: failed to "StartContainer" for "cluster-register" with CrashLoopBackOff: "back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-5959b99bb8-v7bnd_cattle-system(5248515b-e886-42a5-9c78-745ed93d7e4e)"
E0430 12:03:50.845242    1230 pod_workers.go:191] Error syncing pod 5248515b-e886-42a5-9c78-745ed93d7e4e ("cattle-cluster-agent-5959b99bb8-v7bnd_cattle-system(5248515b-e886-42a5-9c78-745ed93d7e4e)"), skipping: failed to "StartContainer" for "cluster-register" with CrashLoopBackOff: "back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-5959b99bb8-v7bnd_cattle-system(5248515b-e886-42a5-9c78-745ed93d7e4e)"
I0430 12:04:02.722645    1230 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E0430 12:04:05.846792    1230 pod_workers.go:191] Error syncing pod 5248515b-e886-42a5-9c78-745ed93d7e4e ("cattle-cluster-agent-5959b99bb8-v7bnd_cattle-system(5248515b-e886-42a5-9c78-745ed93d7e4e)"), skipping: failed to "StartContainer" for "cluster-register" with CrashLoopBackOff: "back-off 5m0s restarting failed container=cluster-register pod=cattle-cluster-agent-5959b99bb8-v7bnd_cattle-system(5248515b-e886-42a5-9c78-745ed93d7e4e)"

Log of the cattle-cluster-agent on the imported k3s cluster:

# kubectl logs -f cattle-cluster-agent-5959b99bb8-v7bnd -n cattle-system
INFO: Environment: CATTLE_ADDRESS=10.42.0.38 CATTLE_CA_CHECKSUM=f73bdedac2dab695146d5db2c77fc10553da847102c059d596a00989b06fbaf5 CATTLE_CLUSTER=true CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-5959b99bb8-v7bnd CATTLE_SERVER=https://1.2.3.4:8443
INFO: Using resolv.conf: search cattle-system.svc.cluster.local svc.cluster.local cluster.local fritz.box nameserver 10.43.0.10 options ndots:5
ERROR: https://1.2.3.4:8443/ping is not accessible (Failed to connect to 1.2.3.4 port 8443: Connection timed out)

But the same endpoint is reachable directly from the node running the imported k3s cluster:

# curl --insecure https://1.2.3.4:8443/ping
pong
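
Since the cattle-cluster-agent runs inside the pod network, the same check from a throwaway pod should reproduce the timeout (just a generic connectivity test; the image and pod name are arbitrary):

# kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -kv https://1.2.3.4:8443/ping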

I already start the k3s server that Rancher runs on like this:

k3s server --tls-san 1.2.3.4 --tls-san domain.tld
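
To rule out a certificate problem on top of that, the SANs actually presented at the CATTLE_SERVER address from the agent log can be checked like this (assuming openssl is available on the node):

# echo | openssl s_client -connect 1.2.3.4:8443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"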

In case I haven't provided enough information to track down the issue, please let me know what else you need.
I would really appreciate any help with this.

Update: I solved it by setting up the entire server from scratch.
The k3s installation had been partially cloned from another server, which left addresses from the original machine behind in its internal configuration and caused these issues.
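
For anyone who runs into the same thing: rebuilding the entire OS may not be strictly necessary. The standard k3s install script also drops an uninstall script that wipes the k3s state (including everything under /var/lib/rancher/k3s), so a clean k3s reinstall on the same machine should clear such leftovers as well. A sketch, assuming the default install paths:

# /usr/local/bin/k3s-uninstall.sh       # removes k3s together with its data and config
# curl -sfL https://get.k3s.io | sh -   # reinstall k3s from scratch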