Resolved: if you're using MicroK8s, create the cluster in Rancher via "Import Existing Cluster" instead of a custom (RKE-provisioned) cluster. MicroK8s already runs its own Kubernetes components as snap services, so the RKE agent has no kubelet container to start.
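For anyone landing here: after creating an imported cluster in the Rancher UI, Rancher generates a cluster-specific registration command along these lines. The manifest path/token below is a placeholder; use the exact command the UI gives you:

```
microk8s.kubectl apply -f https://remote.server.com/v3/import/<cluster-specific-token>.yaml
```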
Hi all! I'm new to Kubernetes and struggling to get a remote Rancher installation to provision a home server running MicroK8s. All my ports are forwarded, and I can stand up other containers on any of those ports using the host Docker client.
Right now, when I run the registration command to start the Rancher agent on my local server, Rancher initially reports:
Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [123.123.1.2]
It runs through a few more entries and ends with:
[[network] Host [123.123.1.2] is not able to connect to the following ports: [192.168.1.123:2379]. Please check network policies and firewall rules]
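A quick way to reproduce that connectivity failure outside of Rancher is a plain TCP probe. This is just a sketch assuming `nc` (netcat) is installed; `check_port` is a helper name made up for this post:

```shell
#!/bin/sh
# Probe a TCP port and print "open" or "closed".
# nc -z connects without sending data; -w sets the timeout in seconds.
check_port() {
    if nc -z -w 3 "$1" "$2" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}
```

For example, `check_port 192.168.1.123 2379` run from the Rancher server's network should print `open` if firewall rules allow etcd traffic through.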
TL;DR: the problem seems to be in my third rancher-agent container, which logs:
+ docker start kubelet
Error response from daemon: {"message":"No such container: kubelet"}
Error: failed to start containers: kubelet
I should also add that the hostname of my MicroK8s server is rancher.
I was clearing my existing containers with the following:
#!/bin/sh
# Force-remove all containers and volumes, then wipe leftover Kubernetes state.
sudo docker rm -f $(sudo docker ps -qa)
sudo docker volume rm $(sudo docker volume ls -q)
cleanupdirs="/var/lib/etcd /etc/kubernetes /etc/cni /opt/cni /var/lib/cni /var/run/calico"
for dir in $cleanupdirs; do
    echo "Removing $dir"
    sudo rm -rf "$dir"
done
and running the following on my host:
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.2.2 --server https://remote.server.com --token somelongtoken123doncheadle --address 123.123.1.2 --internal-address 192.168.1.123 --etcd --controlplane --worker
Logs, etc:
sudo docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5654f69df25b rancher/rke-tools:v0.1.27 "nc -kl -p 1337 -e e…" About a minute ago Up About a minute 80/tcp, 0.0.0.0:6443->1337/tcp rke-cp-port-listener
1a21b71a674e rancher/rancher-agent:v2.2.2 "run.sh --server htt…" 2 minutes ago Up 2 minutes flamboyant_lewin
50a98374dbc8 rancher/rancher-agent:v2.2.2 "run.sh -- share-roo…" 2 minutes ago Up 2 minutes share-mnt
5220e38fe654 rancher/rancher-agent:v2.2.2 "run.sh --server htt…" 2 minutes ago Up 2 minutes keen_mirzakhani
sudo docker logs 5654f69df25b:
(no data)
sudo docker logs 1a21b71a674e:
INFO: Arguments: --server https://remote.server.com --token REDACTED --no-register --only-write-certs
INFO: Environment: CATTLE_ADDRESS=192.168.1.123 CATTLE_AGENT_CONNECT=true CATTLE_INTERNAL_ADDRESS= CATTLE_NODE_NAME=rancher CATTLE_SERVER=https://remote.server.com CATTLE_TOKEN=REDACTED CATTLE_WRITE_CERT_ONLY=true
INFO: Using resolv.conf: nameserver 127.0.0.53 options edns0 search lan
WARN: Loopback address found in /etc/resolv.conf, please refer to the documentation how to configure your cluster to resolve DNS properly
INFO: https://remote.server.com/ping is accessible
INFO: https://remote.server.com resolves to 234.234.2.3
time="2019-05-22T19:05:40Z" level=info msg="Rancher agent version v2.2.2 is starting"
time="2019-05-22T19:05:40Z" level=info msg="Option etcd=false"
time="2019-05-22T19:05:40Z" level=info msg="Option controlPlane=false"
time="2019-05-22T19:05:40Z" level=info msg="Option worker=false"
time="2019-05-22T19:05:40Z" level=info msg="Option requestedHostname=rancher"
time="2019-05-22T19:05:40Z" level=info msg="Option customConfig=map[address:192.168.1.123 internalAddress: roles:[] label:map[]]"
time="2019-05-22T19:05:40Z" level=info msg="Listening on /tmp/log.sock"
time="2019-05-22T19:05:40Z" level=info msg="Connecting to wss://remote.server.com/v3/connect with token REDACTED"
time="2019-05-22T19:05:40Z" level=info msg="Connecting to proxy" url="wss://remote.server.com/v3/connect"
time="2019-05-22T19:05:40Z" level=info msg="Starting plan monitor"
sudo docker logs 50a98374dbc8:
INFO: Arguments: --server https://remote.server.com --token REDACTED --address 123.123.1.2 --internal-address 192.168.1.123 --etcd --controlplane --worker
INFO: Environment: CATTLE_ADDRESS=123.123.1.2 CATTLE_INTERNAL_ADDRESS=192.168.1.123 CATTLE_NODE_NAME=rancher CATTLE_ROLE=,etcd,worker,controlplane CATTLE_SERVER=https://remote.server.com CATTLE_TOKEN=REDACTED
INFO: Using resolv.conf: nameserver 127.0.0.53 options edns0 search lan
WARN: Loopback address found in /etc/resolv.conf, please refer to the documentation how to configure your cluster to resolve DNS properly
INFO: https://remote.server.com/ping is accessible
INFO: remote.server.com resolves to 234.234.2.3
time="2019-05-22T19:05:24Z" level=info msg="Rancher agent version v2.2.2 is starting"
time="2019-05-22T19:05:24Z" level=info msg="Option etcd=true"
time="2019-05-22T19:05:24Z" level=info msg="Option controlPlane=true"
time="2019-05-22T19:05:24Z" level=info msg="Option worker=true"
time="2019-05-22T19:05:24Z" level=info msg="Option requestedHostname=rancher"
time="2019-05-22T19:05:24Z" level=info msg="Option customConfig=map[address:123.123.1.2 internalAddress:192.168.1.123 roles:[etcd worker controlplane] label:map[]]"
time="2019-05-22T19:05:24Z" level=info msg="Listening on /tmp/log.sock"
time="2019-05-22T19:05:24Z" level=info msg="Connecting to wss://remote.server.com/v3/connect/register with token REDACTED"
time="2019-05-22T19:05:24Z" level=info msg="Connecting to proxy" url="wss://remote.server.com/v3/connect/register"
time="2019-05-22T19:05:24Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-05-22T19:05:24Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-05-22T19:05:34Z" level=info msg="Connecting to wss://remote.server.com/v3/connect/register with token REDACTED"
time="2019-05-22T19:05:34Z" level=info msg="Connecting to proxy" url="wss://remote.server.com/v3/connect/register"
time="2019-05-22T19:05:37Z" level=info msg="Starting plan monitor"
sudo docker logs 5220e38fe654:
INFO: Arguments: -- share-root.sh docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.2.2 --server https://remote.server.com --token REDACTED --no-register --only-write-certs /var/lib/kubelet /var/lib/rancher
+ trap 'exit 0' SIGTERM
++ grep :devices: /proc/self/cgroup
++ head -n1
++ awk -F/ '{print $NF}'
++ sed -e 's/docker-\(.*\)\.scope/\1/'
+ ID=50a98374dbc8ca1f7a22b3be0f9c8c81170b36ecd85810a5426b25ca3a588b03
++ docker inspect -f '{{.Config.Image}}' 50a98374dbc8ca1f7a22b3be0f9c8c81170b36ecd85810a5426b25ca3a588b03
+ IMAGE=rancher/rancher-agent:v2.2.2
+ bash -c 'docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.2.2 --server https://remote.server.com --token REDACTED --no-register --only-write-certs'
1a21b71a674e40c68d40ebb2e5ddeb01f3cb9a99474300619d4ce8e7b5e380e9
+ docker run --privileged --net host --pid host -v /:/host --rm --entrypoint /usr/bin/share-mnt rancher/rancher-agent:v2.2.2 /var/lib/kubelet /var/lib/rancher -- norun
+ docker start kubelet
Error response from daemon: {"message":"No such container: kubelet"}
Error: failed to start containers: kubelet
+ sleep 2
+ docker start kubelet
Error response from daemon: {"message":"No such container: kubelet"}
Error: failed to start containers: kubelet
+ sleep 2
(the docker start kubelet / sleep 2 loop repeats indefinitely)