Self-signed certs prevent kubectl from working

I just did a fresh, clean install of RKE2 version v1.28.10+rke2r1 using the install scripts on Ubuntu 20.04.6 LTS. kubectl works when I run it on the master server itself, but it fails on any other system where I install the .kube/config file.

When I run any kubectl command, I see:

E0529 18:15:36.038865  914067 memcache.go:238] couldn't get current server API group list: Get "https://127.0.0.1:6443/api?timeout=32s": x509: certificate signed by unknown authority
Unable to connect to the server: x509: certificate signed by unknown authority
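That error usually means the CA baked into the kubeconfig doesn't match the CA that signed the API server's certificate. One way to check is to decode the CA embedded in the kubeconfig and inspect it; this is just a sketch (`print_kubeconfig_ca` is a made-up helper name, and the path in the comment is the RKE2 default):

```shell
# Sketch: print the subject and expiry of the CA cert embedded in a kubeconfig,
# so it can be compared against the CA the API server is actually using.
print_kubeconfig_ca() {
  awk '/certificate-authority-data:/ {print $2}' "$1" \
    | base64 -d \
    | openssl x509 -noout -subject -enddate
}

# e.g. print_kubeconfig_ca /etc/rancher/rke2/rke2.yaml
```

If the subject or validity dates differ between the copied kubeconfig and the server's CA, the kubeconfig is stale.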

The installer is supposed to generate new certs and keys, right?

Thanks for any help.

@thezog Did you edit the config file on the other systems to change 127.0.0.1 to the IP address of the RKE2 node?

Are you talking about config.yaml or rke2.yaml?

And BTW, all the nodes are rke2 nodes. Should I assume you mean the IP address of the master node?

@thezog rke2.yaml. I normally have multiple directories for notes etc., so my SOP is:

scp -- root@node:/etc/rancher/rke2/rke2.yaml $(pwd)
sed -i 's/127.0.0.1/xxx.xxx.xxx.xxx/g' rke2.yaml
export KUBECONFIG=$(pwd)/rke2.yaml

Weird. I sure don’t remember needing to copy the rke2.yaml from master to slave (agent) nodes.

On the agent node I see:
May 29 22:44:43 cp2 rke2[49211]: time="2024-05-29T22:44:43Z" level=fatal msg="Failed to reconcile with temporary etcd: bootstrap data already found and encrypted with different token"
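That message means the node still has bootstrap/etcd data from an earlier cluster, encrypted with a token that no longer matches. The usual fix I've seen is to stop the service and clear the stale state before rejoining; a sketch, using the default RKE2 paths, to be run on the affected node as root:

```shell
# Sketch: clear stale etcd/bootstrap state so the node can re-register with the
# current cluster token. Paths are the RKE2 defaults; run as root.
systemctl stop rke2-server 2>/dev/null || true   # or rke2-agent, depending on role
rm -rf /var/lib/rancher/rke2/server/db           # stale etcd database from the old cluster
# then double-check token: and server: in /etc/rancher/rke2/config.yaml and restart
```

This is destructive on a server node (it drops that node's etcd copy), so only do it on a node you intend to re-bootstrap.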

I guess I need to wipe the old etcd database somewhere. Probably the local data directory, or something.

No, that’s for a local machine to contact the cluster nodes/master.

@thezog Did you edit the config.yaml and add tls-san?
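For reference, extra SANs go in the server node's config.yaml; the IP and hostname below are placeholders:

```yaml
# /etc/rancher/rke2/config.yaml on the server node (values are placeholders)
tls-san:
  - 10.0.0.10               # the IP you substitute into rke2.yaml
  - rke2.example.internal   # any DNS name clients will use to reach the API
```

After editing, restarting rke2-server should regenerate the serving certificate with the new SANs, so kubectl can validate the cert when connecting by that IP or name.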

Please archive this thread. I had some old config running in this cluster; there was a broken nginx ingress set up. I wiped the entire cluster of VMs.