Localhost:8080 was refused

Hello all,

I am a total noob regarding Kubernetes so please forgive me if this is an obvious one and I’m just not seeing it.

I’ve been following this GUIDE to set up a cluster of Raspberry Pis.

I believe I have the server up and running, but both of my agents return an error. When I run kubectl get nodes -o wide, they both return: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Any guidance you can lend would be most appreciated.

Cheers…

On my cluster, I receive the same message when running the command from the nodes instead of the master. Maybe that's what's causing it?

@Magnus_Grongaard yes, this is what was causing it. I learned that the other night with help from the Rancher Slack channel. Forgot to circle back and update this post.
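
For anyone who finds this later, a rough way to confirm it's the missing-kubeconfig case (a sketch; paths assume a default k3s install):

# kubectl falls back to localhost:8080 when it can't find any kubeconfig
echo "$KUBECONFIG"               # typically empty on an agent
ls -l ~/.kube/config             # typically absent on an agent
ls -l /etc/rancher/k3s/k3s.yaml  # written by k3s on server nodes only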

Cheers…

Got the same issue here. It's the only node I have, so I assumed it was already the master.
config:

hostname: k3s-master-60
ssh_authorized_keys:
  - xxx

write_files:
  - path: /var/lib/connman/default.config
    content: |-
      [service_eth0]
      Type=ethernet
      IPv4=192.168.178.60/255.255.255.0/192.168.178.60
      IPv6=off
      Nameservers=192.168.178.52

k3os:
  ntp_servers:
  - 0.de.pool.ntp.org
  - 1.de.pool.ntp.org
  - 2.de.pool.ntp.org
  - 3.de.pool.ntp.org
  - ptbtime1.ptb.de
  - ptbtime2.ptb.de
  - ptbtime3.ptb.de
  dns_nameservers:
  - 8.8.8.8
  - 1.1.1.1
  token: serverSecret
  password: xxx 

where serverSecret is the actual text for the token.
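
In case it helps with debugging, a rough way to check whether the k3s server actually came up (a sketch; assumes k3os's BusyBox userland):

# The API server should be listening on 6443; 8080 is only kubectl's fallback default
netstat -tln | grep 6443
# Check that the k3s process is running at all
ps | grep '[k]3s'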
I even tried a config with

k3s_args:
  - server
  - "--disable-agent"

and without setting the token, but it didn't help.

Since Release v0.11.1 of rancher/k3os did not boot on my RPi 4, I used sgielen/picl-k3os-image-generator to generate my image. I don't know if it matters =)

[Screenshot of the error and the netstat output]

What am I missing?

Thanks in advance!

And as a side problem… my system time is broken as well:

Sorry for splitting this across comments, but I am not allowed to put that screenshot into the first comment because my account is new. =)

I found the solution myself:
I randomly stumbled across it after throwing every command I could find at my RPi 4. After a lot of trial and error to figure out which commands were actually needed, I narrowed it down to the following config addition. No guarantees that it works 100% of the time, but I re-flashed and booted the image several times and it worked every time.

boot_cmd:
  # enable OpenRC's software clock at boot (the Pi has no RTC)
  - "ln -sf /etc/init.d/swclock /etc/runlevels/boot/swclock"
run_cmd:
  # (re)start NTP after boot, since it crashed during k3os's own startup
  - "service ntpd start"

Source of the needed code:

Edit: Forgot to mention that the time issue is what caused the initial localhost:8080 issue from above. That's why the new config simply starts the time service again after it crashed during k3os's original startup.
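
If you want to verify the time part by hand before rebooting, something like this (a sketch; OpenRC service names as shipped with k3os):

# A clock stuck in 1970 keeps k3s from starting cleanly (see the edit above)
date
# Restart NTP manually instead of waiting for run_cmd
service ntpd restart
# Confirm swclock is enabled in the boot runlevel
rc-update show boot | grep swclock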

This could help you if you haven’t managed to solve the problem.

  1. First check if the k3s service is running
  • systemctl status k3s
  2. If it is running, copy the k3s.yaml file
  • sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

This should work.
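
Putting it together (a sketch assuming a standard k3s install; the chown step is my addition, to avoid the root-owned config you get after sudo cp):

# 1. Confirm the k3s service is up
systemctl status k3s
# 2. Make kubectl use the k3s-generated kubeconfig
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
kubectl get nodes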

Hi, two years later I am dealing with the same problem. Again it is a Raspberry Pi with k3s installed on it, this time arm64. None of the solutions above helped. It looks like kubectl does not even consider the config file and tries to contact the server on 8080, while it is actually running on 6443.

😣 Please help


Okay, so as I understand it:

  1. The errors are expected when the command is run from a node rather than the master, because a node does not have a config file by default.
  2. To address this, copy the config file from the master, /etc/rancher/k3s/k3s.yaml, to a new location on the agent, e.g. ~/.kube/k3s.yaml (see the sketch after this list).
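
For step 2, an equivalent sketch (MASTER_IP is a placeholder, and it assumes root SSH access to the master):

# Pull the kubeconfig from the master onto the agent
mkdir -p ~/.kube
scp root@MASTER_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
# Tell kubectl (or k3s kubectl) which config to use for this shell session
export KUBECONFIG=~/.kube/k3s.yaml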

Running k3s kubectl get node on the agent then gave a similar error, but mentioning 127.0.0.1:6443 rather than the prior 127.0.0.1:8080.

This led me to modify the newly copied file on the agent to point to the master's IP address:
server: https://192.168.10.211:6443
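
If you'd rather script that edit than open the file (the IP here is from my setup; substitute your master's address):

# Replace the loopback address k3s writes by default with the master's IP
sed -i 's/127.0.0.1/192.168.10.211/' ~/.kube/k3s.yaml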

Running k3s kubectl get node on the agent then gave the expected response:

NAME   STATUS   ROLES                  AGE    VERSION
rpi3   Ready    <none>                 173m   v1.28.4+k3s2
rpi1   Ready    control-plane,master   5d2h   v1.27.7+k3s2
rpi2   Ready    <none>                 4d6h   v1.28.4+k3s2
rpi4   Ready    <none>                 170m   v1.28.4+k3s2

This solution FEELS wrong, but APPEARS to be working. Happy for someone to educate me further.

@Lyndsey_Paxton Hi, no, that is the correct/preferred/suggested way… I don't do anything on the nodes themselves, just on a local machine (I'm on openSUSE Tumbleweed)…

mkdir -p ~/some_cluster
cd ~/some_cluster
# Pull the kubeconfig from the master (the xxx parts are your host/IP)
scp -- root@master-xxx:/etc/rancher/k3s/k3s.yaml $(pwd)
# Point the config at the master's IP instead of loopback
sed -i 's/127.0.0.1/xxx.xxx.xxx.xxx/g' k3s.yaml
export KUBECONFIG=$(pwd)/k3s.yaml

kubectl get nodes
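
If you want that to survive new shells, one option (an assumption about your shell setup, not part of the original steps) is to persist the export:

# Persist the kubeconfig location for future sessions (adjust for your shell)
echo 'export KUBECONFIG=$HOME/some_cluster/k3s.yaml' >> ~/.bashrc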