Couldn't get resource list for metrics.k8s.io

I’m following the instructions at Helm CLI Quick Start | Rancher to set up a cluster on my server with 3 VMs. I installed k3s using those instructions and did not specify a k3s version. Afterwards, I copied k3s.yaml to ~/.kube/config and ran kubectl get node. The following is the result:

ubuntu@kubemaster:~$ sudo kubectl get node
E0606 23:35:32.855132    4680 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0606 23:35:32.864792    4680 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0606 23:35:32.868188    4680 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0606 23:35:32.869954    4680 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME         STATUS   ROLES                       AGE   VERSION
kubemaster   Ready    control-plane,etcd,master   24m   v1.26.5+k3s1
ubuntu@kubemaster:~$ 
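
For completeness, the copy steps were roughly along these lines (a sketch; the source path is the k3s default, and the chown is only needed to run kubectl without sudo):

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config   # optional: lets kubectl run as the normal user
export KUBECONFIG=~/.kube/config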

I saw another post here on the same topic that said the problem was a firewall, but with no firewall running on the master VM, I’m not sure what the cause could be.
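
To be clear about what I mean by "no firewall", the check I’m going by is something like this (a sketch, assuming Ubuntu with ufw):

sudo ufw status                              # reports "Status: inactive" here
sudo iptables -S | grep -E 'DROP|REJECT'     # k3s installs its own chains, so only stray DROP/REJECT rules would matter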

Typically, when you see an error like this, metrics.k8s.io/v1beta1 is the victim, not the cause. It simply appears to be sensitive to other problems in the cluster.

The real cause tends to be something else that is ill; metrics.k8s.io/v1beta1 itself is just a red herring.
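
A reasonable first pass is to look at the aggregated API layer directly, something along these lines (a sketch; the metrics-server deployment name and namespace can differ between distributions):

kubectl get apiservices                              # anything showing AVAILABLE=False is a suspect
kubectl -n kube-system get pods                      # is the metrics-server pod (or your distro's equivalent) healthy?
kubectl -n kube-system logs deploy/metrics-server    # assumed deployment name; adjust for your packaging
kubectl get --raw /apis/metrics.k8s.io/v1beta1       # query the aggregated API directly rather than via discovery

If the APIService reports as available and the raw query answers, the client-side discovery cache on the node showing the error is also worth a look, since the message itself comes from kubectl's discovery code (memcache.go).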

What would the next steps be to find what else is ill? I get the same output on 2 out of 3 nodes in an HA RKE2 cluster. kubectl still returns the expected output every time, but on those nodes it’s preceded by the error shown. All pods are running, all API services are available, and I’m finding nothing amiss in any log output I look at.
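
For context, the checks behind that last statement look roughly like this (paraphrased; exact invocations vary a bit per node):

kubectl get pods -A                                    # everything Running or Completed
kubectl get apiservices                                # all entries, including v1beta1.metrics.k8s.io, show AVAILABLE=True
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml  # the Available condition looks normal
journalctl -u rke2-server --since "1 hour ago"         # nothing obviously wrong in the server logs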