Hi
I successfully installed Rancher (latest) on a local k3s cluster (local) via Helm.
So far so good…
Then I successfully installed a second, fresh k3s cluster with 3 nodes.
Both clusters are basically working fine.
Now I wanted to import the freshly installed k3s cluster into Rancher.
I used this command (because of the self-signed SSL certificate):
curl --insecure -sfL https://rancher.arahome.ml/v3/import/rkw7wmrkl9j6hvsv6rk729gwwbbkvbnvbj8tb8qzz674kwxz5j6g77_c-m-7z8x4tvn.yaml | kubectl apply -f -
I also created a cluster-admin ClusterRoleBinding for the user "default", matching the ".kube/config" file:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user default
kubectl logs -n cattle-system cattle-cluster-agent-577dd689c-vq5sl
…
…
ERROR: https://rancher.dom.com/ping is not accessible (Failed to connect to rancher.dom.com port 443: Connection timed out)
I checked on the Rancher cluster (local) whether port 443 is exposed:
kubectl get svc -A --kubeconfig=".kube/config.rancher" | grep traefik
kube-system traefik LoadBalancer 10.43.97.144 192.168.2.161,192.168.2.171,192.168.2.172 80:30091/TCP,443:30813/TCP 4d22h
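To narrow down whether the failure is DNS or routing, one option (a sketch; the image name is an assumption, and the rancher.arahome.ml URL is taken from the import command above) is to curl the /ping endpoint from a throwaway pod inside the downstream cluster:

```shell
# Debugging sketch: curl the Rancher /ping endpoint from inside the
# downstream cluster. "Could not resolve host" points to a cluster-DNS
# problem; "Connection timed out" points to a routing/firewall problem.
kubectl run rancher-ping-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -kv -m 10 https://rancher.arahome.ml/ping
```

On a healthy path this prints `pong`; otherwise the curl error message tells you which layer is failing.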
What am I missing here?
Thx ara
In addition, I have full TCP/IP connectivity between all nodes…
I noticed a discrepancy between your Rancher server URL and the site the imported cluster is trying to ping. Those URLs need to be the same. Can you go into the global settings and make sure server-url matches where Rancher is actually running?
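One way to check that from the command line (a sketch, assuming kubectl access to the local cluster; the kubeconfig path is taken from the post above) is to read the `server-url` setting directly:

```shell
# Print the configured Rancher server-url from the local cluster and
# compare it against the URL the cattle-cluster-agent is trying to ping.
kubectl get settings.management.cattle.io server-url \
  -o jsonpath='{.value}' --kubeconfig .kube/config.rancher
```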
I ran into the same issue again, but now I have really found a solution:
Quoting a GitHub issue (opened 05 Nov 2018, labeled kind/bug):
**What kind of request is this (question/bug/enhancement/feature request):** bug…
**Steps to reproduce (least amount of steps as possible):**
Provision a new HA cluster using RKE. The cluster.yml file should only have the nodes stanza: 3 nodes with all roles assigned. Delete one of the 3 rancher pods.
```
root@massimo-server:~# kubectl delete pod/rancher-6dc68bb996-95rbw -n cattle-system
```
**Result:**
The rancher pod is recreated by the ReplicaSet, but the cattle-cluster-agent fails and goes into CrashLoopBackOff state.
```
root@massimo-server:~# kubectl logs --follow \
pod/cattle-cluster-agent-7bcbf99f56-vdrbs -n cattle-system
INFO: Environment: CATTLE_ADDRESS=10.42.1.5 CATTLE_CA_CHECKSUM= CATTLE_CLUSTER=true CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7bcbf99f56-vdrbs CATTLE_SERVER=https://massimo.rnchr.nl
INFO: Using resolv.conf: nameserver 10.43.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://massimo.rnchr.nl/ping is not accessible (Could not resolve host: massimo.rnchr.nl)
```
Using curl from outside of the cattle-cluster-agent pod works:
```
root@massimo-server:~# curl https://massimo.rnchr.nl/ping
pong
```
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI):
**rancher/rancher v2.1.1**
- Installation option (single install/HA): **HA**
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): **RKE provisioned**
- Machine type (cloud/VM/metal) and specifications (CPU/memory): **cloud, 2 CPU/4 GB**
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
```
- Docker (use `docker info`):
```
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.03.2-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 5
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-138-generic
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859 GiB
Name: massimo-server
ID: 747D:4N33:DFHF:IWOG:Z3RK:G22I:3663:ZSHQ:5HKF:SOGK:M3V4:EKQA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
```
gz#11051
But why does that do the trick?
I solved this by editing the deployment and changing the DNS policy from ClusterFirst to Default:
kubectl edit deployment cattle-cluster-agent -n cattle-system
…
dnsPolicy: Default
It works now after that change, but I don't understand why it does.
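A plausible explanation (my reading of Kubernetes DNS policies, not confirmed in this thread): with the default `dnsPolicy: ClusterFirst`, the agent pod resolves names through the cluster DNS service (CoreDNS), so a Rancher hostname that is only known to the nodes, e.g. via `/etc/hosts` or a local resolver that CoreDNS does not forward to, fails to resolve inside the pod even though it resolves fine on the node itself. With `dnsPolicy: Default`, the pod inherits the node's `/etc/resolv.conf` instead. The edited deployment ends up with a fragment like:

```yaml
# cattle-cluster-agent Deployment fragment after the edit (sketch)
spec:
  template:
    spec:
      dnsPolicy: Default   # inherit the node's resolv.conf instead of
                           # resolving through cluster DNS (ClusterFirst)
```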