127.0.0.1:6443 was refused

I have set up a k3s cluster at home and it runs OK.
But when I moved the whole setup to school, I couldn't even see my own node:
sudo k3s kubectl get node
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
I am sure I did not modify any of the cluster's (static) IP addresses.

I have even set up the router with the same subnet, just to be sure the IP addresses do not change.

Is there anything I should check? Thx.
If the problem cannot be solved, I don't mind resetting the system. How can I do that?
Thanks.

When I typed
sudo netstat -atunp
it shows something listening on 127.0.0.1:6444. Is that the k3s agent? (Not 6443?)
I am looking for a way to change the port number back…
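
In case it matters, this is how I am checking which process is actually holding each port (assuming ss is available on the Jetson image; the netstat output above should show the same thing):

sudo ss -tlnp | grep -E '6443|6444'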

I think the error comes from the server line in your ~/.kube/config file. You might be able to just change the port on that line and have things work, but the risk is that everything else that uses it might need to be reconfigured too.
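
For example (guessing at the default K3S paths here, since I mostly work with RKE2), you could compare the port in the kubeconfig the kubectl command is using against the one in the cluster's own kubeconfig:

grep server ~/.kube/config
sudo grep server /etc/rancher/k3s/k3s.yaml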

Most of the time when I've seen the port incremented to the wrong value like that, it's because something else was already using the right port when the service started up. A clean reboot would normally fix that, especially if it's just something grabbing the port by chance. If another startup service is grabbing it and then dying, you'd need to track that service down and stop it.
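
Something along these lines should show whether another process grabbed 6443 before k3s came up (assuming lsof is installed; ss works just as well):

sudo lsof -i :6443
sudo ss -tlnp | grep 6443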

I may be missing it with all the other cruft in the way, but I don't see the etcd ports or 9345 listening either, and I'd expect those with RKE2 (I haven't done much with K3S, so maybe it doesn't use them?). I usually don't care about UDP, so I omit the 'u' from the command line and toss a | grep LISTEN at the end to get just the listening ports (I've found netstat's 'l' flag less reliable for some reason).
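
i.e. something like:

sudo netstat -atnp | grep LISTEN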


Hi, thanks for the reply.
The funny part is that the cluster works at home but not in the office. I have set up my office router to be the same as the one at home (static IPs, no internal firewall). In theory it should work, right?
I didn't install anything except for testing k3s, so 6443 should be free. I have restarted the board and still have the same problem.
What worries me is that if I use k3s in a real job and it crashes like this, how would I investigate the problem…

I followed this link before and it works at home: Running K3s, Lightweight kubernetes on NV Jetson cluster - Hackster.io

I tried to install k3s again, but there was an error saying it is already installed. (I didn't take a screen capture last night…)

I may try to uninstall k3s (Rancher Docs: Uninstalling K3s) and install it again.
Worst case, I burn another SD card image and reinstall the master.
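
If I'm reading the uninstall doc right, the install script drops an uninstall script on the node, so the reset on the master would be roughly this (agents use k3s-agent-uninstall.sh instead):

sudo /usr/local/bin/k3s-uninstall.sh
curl -sfL https://get.k3s.io | sh -

and then I would rejoin the workers with whatever K3S_URL/K3S_TOKEN settings the Hackster guide used.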

Thanks,

You could try journalctl -u ${SERVICE_NAME} for your K3S services, especially the one that's supposed to be on 6443, and see if there's an error. Another thing might be to disable the K3S services and start them manually after boot, making sure nothing is squatting on 6443.
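
Roughly (guessing the default unit names, k3s on the server and k3s-agent on the workers):

sudo journalctl -u k3s -e
sudo systemctl disable k3s
sudo reboot

Then, after the reboot, check that nothing is on 6443 before starting it by hand:

sudo ss -tlnp | grep 6443
sudo systemctl start k3s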

If it happens more than once, then it seems likely it's trying to start it twice or something? Maybe systemd (which I hate and don't trust in the slightest) starts it, loses communication with the process that's still running and holding the port, and tries restarting it, so on the second start attempt the port is already grabbed by the first process that systemd has already abandoned and plans to kill.
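
One way to test that theory (again assuming the unit is just called k3s) is to check the service state and whether an orphaned k3s process is still holding the port:

systemctl status k3s
ps aux | grep 'k3s server' | grep -v grep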
