K3s agent accessing master node

Has anyone gotten a k3s agent working against a k3s server running on a separate, network-reachable host?

I’ve copied my master node’s /var/lib/rancher/k3s/server/node-token over to the agent node. (I created the agent1 directory shown below because this box already hosts a cluster of its own; I need to run an agent here to join a cluster whose master node is a different box.) I’ve also copied the master node’s /etc/rancher/k3s/k3s.yaml into the same agent1 directory.
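For reference, the copy was along these lines (run on the agent box; "master-host" is a placeholder for the .39.26 machine, and the destination file names are just what I chose):

mkdir -p /var/lib/rancher/k3s/agent1
scp root@master-host:/var/lib/rancher/k3s/server/node-token /var/lib/rancher/k3s/agent1/cluster_token.dat
# k3s.yaml points at https://127.0.0.1:6443 by default, so the server: line needs editing to the master's address after copying
scp root@master-host:/etc/rancher/k3s/k3s.yaml /var/lib/rancher/k3s/agent1/kube-master.config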

kubectl --kubeconfig=/var/lib/rancher/k3s/agent1/kube-master.config cluster-info
=> Kubernetes master is running at https://155.246.39.26:6443

So I can run various kubectl commands against the master node at 39.26 and list k8s resources just fine.
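For example, listing the nodes with the copied kubeconfig:

kubectl --kubeconfig=/var/lib/rancher/k3s/agent1/kube-master.config get nodes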

But when I use the master URL as the agent’s --server URL:

k3s agent --server https://155.246.39.26:6443 --token "$(cat /var/lib/rancher/k3s/agent1/cluster_token.dat)" --data-dir /var/lib/rancher/k3s/agent1/data --node-label 'worker-node=node2'

I’m getting the error below. No idea why, as I’m passing the token the way the documentation indicates is necessary.
INFO[2020-04-14T10:19:34.593642007-04:00] Starting k3s agent v0.6.1 (7ffe802a)
ERRO[2020-04-14T10:19:34.673690759-04:00] https://155.246.39.26:6443/v1-k3s/config: 401 Unauthorized
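For what it’s worth, an equivalent way to pass the token (assuming the K3S_TOKEN environment variable is honored the same way as --token) would be:

export K3S_TOKEN="$(cat /var/lib/rancher/k3s/agent1/cluster_token.dat)"
k3s agent --server https://155.246.39.26:6443 --data-dir /var/lib/rancher/k3s/agent1/data --node-label 'worker-node=node2'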

Clearly, I’m missing some other piece that’s needed. I’d appreciate any pointers, insights, suggestions.
Thanks,
-steve

– 1st edit –
I meant to include the version info.
Going back to get that, I discovered that the master node is running k3s v1.17.2+k3s1 (cdab19b0) while the agent node has v0.6.1 (7ffe802a). That mismatch is clearly not ideal. I’m going to update the agent node’s k3s binary to match the master node and retry.

Yep - installing v1.17.2+k3s1 on the agent box (using an alternate bin location via INSTALL_K3S_BIN_DIR) seems to have helped.
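Roughly what I ran (the bin directory is just where I wanted the binary on this box, and INSTALL_K3S_SKIP_START keeps the installer from starting a service that would collide with the cluster already running here, if I have that variable name right):

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.17.2+k3s1" INSTALL_K3S_BIN_DIR="$HOME/k3s-bin" INSTALL_K3S_SKIP_START=true sh -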

k3s-bin/k3s agent --server https://155.246.39.26:6443 --token "$(cat /var/lib/rancher/k3s/agent1/cluster_token.dat)" --data-dir /var/lib/rancher/k3s/agent1/data --node-label 'worker-node=node2'

=> INFO[0000] Preparing data dir /var/lib/rancher/k3s/agent1/data/data/7c4aaa633ac3ff4849166ba2759da158a70beb5734940e84b6e67011a35f4c59
INFO[2020-04-14T11:14:31.239647440-04:00] Starting k3s agent v1.17.2+k3s1 (cdab19b0)
INFO[2020-04-14T11:14:31.239992693-04:00] module overlay was already loaded

Now I’m running into some port-binding issues, because this box already has a k3s kubelet running for a different cluster (it has an existing master node setup, including that node’s kubelet).

Hopefully it’s just a case of changing the kubelet port (10250) and the healthz server port (10248) via the --kubelet-arg and --kube-proxy-arg options.
If not, I’ll just get another box allocated.
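What I was planning to try, roughly (the alternate port numbers are arbitrary picks, and this assumes the kubelet/kube-proxy flags get passed through unchanged):

k3s-bin/k3s agent --server https://155.246.39.26:6443 --token "$(cat /var/lib/rancher/k3s/agent1/cluster_token.dat)" --data-dir /var/lib/rancher/k3s/agent1/data --node-label 'worker-node=node2' --kubelet-arg "port=10260" --kubelet-arg "healthz-port=10258" --kube-proxy-arg "healthz-bind-address=127.0.0.1:10266"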

I cloned the GitHub repo earlier and looked for where ExtraKubeletArgs is used, and found that the default port 10250 is not configurable. Without the ability to change the ports the sockets bind to, it’s not clear how one could run more than one agent and kubelet on a given host, so it looks like the current constraint is that a given host can only host (pun intended) one node. Good to know. If I’m wrong, please provide evidence showing how, and I’ll go with the evidence. I’ve been wrong before.
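For anyone who wants to double-check my reading, the search was nothing fancy, just grepping the clone:

git clone https://github.com/rancher/k3s.git
cd k3s
grep -rn "ExtraKubeletArgs" .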