Installing Rancher with KVM

I’ve set up 3 KVMs on my localhost and am trying to install Rancher 2.6. I followed the instructions at Rancher Docs: Helm CLI Quick Start. However, when I go to https://<ip.of.kubernetes.master> I just see a 404, and I don’t know how to troubleshoot it. If I run kubectl get nodes I do see my node in a Ready state.

Although it’s probably not of any help, I’m copying the output from kubectl to show that the node is running.

root@kubemaster:~/.kube# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
kubemaster   Ready    control-plane,master   24m   v1.23.6+k3s1
root@kubemaster:~/.kube# kubectl describe node kubemaster
Name:               kubemaster
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubemaster
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=true
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"46:94:f1:a0:7a:58"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.0.30
                    k3s.io/hostname: kubemaster
                    k3s.io/internal-ip: 192.168.0.30
                    k3s.io/node-args: ["server"]
                    k3s.io/node-config-hash: 4AOUITUOKVWS7HOOJ2IKMYYTISJGUJRZIDAUI6PHJAJYYPKK2DHA====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/8c2b0191f6e36ec6f3cb68e2302fcc4be850c6db31ec5f8a74e4b3be403101d8"}
                    management.cattle.io/pod-limits: {"memory":"170Mi"}
                    management.cattle.io/pod-requests: {"cpu":"200m","memory":"140Mi","pods":"13"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 20 Jun 2022 21:29:20 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kubemaster
  AcquireTime:     <unset>
  RenewTime:       Mon, 20 Jun 2022 21:54:02 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 20 Jun 2022 21:53:09 +0000   Mon, 20 Jun 2022 21:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 20 Jun 2022 21:53:09 +0000   Mon, 20 Jun 2022 21:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 20 Jun 2022 21:53:09 +0000   Mon, 20 Jun 2022 21:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 20 Jun 2022 21:53:09 +0000   Mon, 20 Jun 2022 21:29:31 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.0.30
  Hostname:    kubemaster
Capacity:
  cpu:                2
  ephemeral-storage:  41688972Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4020092Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  40555031930
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4020092Ki
  pods:               110
System Info:
  Machine ID:                 a5255b4a7a944a7791958091ffa00fcb
  System UUID:                a5255b4a-7a94-4a77-9195-8091ffa00fcb
  Boot ID:                    a3469417-8737-40c3-a07e-f1a3635f3f17
  Kernel Version:             5.13.0-51-generic
  OS Image:                   Ubuntu 21.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.11-k3s2
  Kubelet Version:            v1.23.6+k3s1
  Kube-Proxy Version:         v1.23.6+k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://kubemaster
Non-terminated Pods:          (13 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  kube-system                 local-path-provisioner-6c79684f77-vlzps    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  kube-system                 coredns-d76bd69b-p5rlq                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     24m
  kube-system                 svclb-traefik-m5hkj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  kube-system                 metrics-server-7cd5fcb6b7-tfqtv            100m (5%)     0 (0%)      70Mi (1%)        0 (0%)         24m
  kube-system                 traefik-df4ff85d6-cv2bf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  cert-manager                cert-manager-cainjector-9b679cc6-mf5mf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
  cert-manager                cert-manager-76d44b459c-4w94v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
  cert-manager                cert-manager-webhook-57c994b6b9-x926b      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
  cattle-system               rancher-7759f6cf79-nbhfl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
  cattle-fleet-system         gitjob-6b977748fc-txfl7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
  cattle-fleet-system         fleet-controller-784d6fbcd8-krb22          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
  cattle-system               rancher-webhook-5b65595df9-2jnbn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  cattle-fleet-local-system   fleet-agent-699b5fb945-mjfd5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (10%)  0 (0%)
  memory             140Mi (3%)  170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From                   Message
  ----     ------                   ----               ----                   -------
  Normal   Starting                 24m                kube-proxy             
  Normal   NodeHasSufficientPID     24m (x2 over 24m)  kubelet                Node kubemaster status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  24m                kubelet                Updated Node Allocatable limit across pods
  Warning  InvalidDiskCapacity      24m                kubelet                invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  24m (x2 over 24m)  kubelet                Node kubemaster status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet                Node kubemaster status is now: NodeHasNoDiskPressure
  Normal   Starting                 24m                kubelet                Starting kubelet.
  Normal   Synced                   24m                cloud-node-controller  Node synced successfully
  Normal   NodeReady                24m                kubelet                Node kubemaster status is now: NodeReady
  Normal   RegisteredNode           24m                node-controller        Node kubemaster event: Registered Node kubemaster in Controller

Hi and thank you for your question!
If you followed the instructions on the doc page you mention, at some point you should have run a helm install command like this:

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=<IP_OF_LINUX_NODE>.sslip.io \
  --set replicas=1 \
  --set bootstrapPassword=<PASSWORD_FOR_RANCHER_ADMIN>

You can see that Rancher needs a hostname parameter (--set hostname=...). That’s the address the Ingress Controller will listen on in order to route you to Rancher.

Can you please use https://<ip.of.kubernetes.master>.sslip.io instead of just the IP address? If that does not solve your issue, please let us know.
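For reference, sslip.io is just a wildcard DNS service that maps <ip>.sslip.io back to <ip>, so no extra DNS setup is needed. If you want to double-check which hostname your install is actually serving, here is a quick sketch (assuming the default names from the chart, where both the Helm release and the ingress are called rancher):

# What hostname was the chart installed with?
helm get values rancher --namespace cattle-system

# Which host is the ingress answering for?
kubectl get ingress rancher --namespace cattle-system -o wide

# sslip.io should resolve that name straight back to the node IP
nslookup 192.168.0.30.sslip.io

If the HOSTS column on the ingress shows something other than what you’re typing into the browser, that mismatch is exactly what produces the 404.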

I’m reinstalling Rancher, but I’m getting another error.

sudo kubectl create namespace cattle-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I might need a kubeconfig, but I don’t know where it is.

The localhost:8080 error is what you get when there is no kubeconfig file. By default, both kubectl and helm look for it at ~/.kube/config.

I’m not sure under what conditions sudo will use your normal user’s home vs. root’s home. I know that for most environment variables you need the -i option to get root’s environment (like adding the various sbin directories to $PATH on some systems), but I’m not sure about $HOME (I tend to always use -i, so I always end up with root’s home). With kubectl you can pass the --kubeconfig ${FILE_PATH} option to specify it for certain if you don’t want to mess with any of that.
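For example (a sketch — substitute your real kubeconfig path), both tools accept the flag, and the KUBECONFIG environment variable works too:

kubectl --kubeconfig /path/to/your/kubeconfig get nodes
helm --kubeconfig /path/to/your/kubeconfig list --all-namespaces

# or set it once for the whole shell session
export KUBECONFIG=/path/to/your/kubeconfig
kubectl get nodes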

Where you’ll find the kubeconfig file depends on how you installed Kubernetes, but it usually lands somewhere in /etc on the master. For example, the last time I installed vanilla Kubernetes it put the file at /etc/kubernetes/admin.conf, and RKE2 puts it at /etc/rancher/rke2/rke2.yaml, so you’ll have to consult the docs for whichever distribution you installed. Note, though, that this is not a config you want to pass around: I believe it contains a cert that can’t be invalidated, so you’d want to generate separate kubeconfig files for anything you hand out.
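In your case, the node version string (v1.23.6+k3s1) says you’re running k3s, and k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. So, assuming a default k3s install, something like this should get you going:

sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml create namespace cattle-system

# or copy it into place for your own user (mind the file's permissions)
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

After that, plain kubectl and helm commands (without sudo) should stop complaining about localhost:8080.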