Rancher 2.x ending up at default backend 404

Hi all,

I deployed a 3-node Rancher cluster with RKE, but I keep ending up at the “default backend - 404” page instead of the web UI. Is there something I’m missing here?

I’m not 100% sure about my cluster config file; the config examples around the web differ quite a bit from one another.

cluster_config.yml:


nodes:
  - address: 192.168.122.6
    user: wouter
    role:
      - controlplane
      - etcd
      - worker
    ssh_key_path: ~/.ssh/id_rsa
    port: 22
  - address: 192.168.122.7
    user: wouter
    role:
      - controlplane
      - etcd
      - worker
    ssh_key_path: ~/.ssh/id_rsa
    port: 22
  - address: 192.168.122.8
    user: wouter
    role:
      - controlplane
      - etcd
      - worker
    ssh_key_path: ~/.ssh/id_rsa
    port: 22

services:
  etcd:
    image: rancher/coreos-etcd:v3.1.12
  kube-api:
    image: rancher/hyperkube:v1.10.1-rancher2
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
  kube-controller:
    image: rancher/hyperkube:v1.10.1-rancher2
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: rancher/hyperkube:v1.10.1-rancher2
  kubelet:
    image: rancher/hyperkube:v1.10.1-rancher2
    cluster_domain: lab.local
    cluster_dns_server: 10.43.0.10
    infra_container_image: rancher/pause-amd64:3.1
  kubeproxy:
    image: rancher/hyperkube:v1.10.1-rancher2
    extra_args: {}
    extra_binds: []

network:
  plugin: canal

authentication:
  strategy: x509

authorization:
  mode: rbac

cluster_name: wouterdev

ingress:
  provider: nginx

kubectl get pods --all-namespaces:

NAMESPACE       NAME                                      READY     STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-564b9b6c5b-pk642     1/1       Running     0          1m
ingress-nginx   nginx-ingress-controller-4zwx2            1/1       Running     0          1m
ingress-nginx   nginx-ingress-controller-fgdn9            1/1       Running     0          1m
ingress-nginx   nginx-ingress-controller-wsqrw            1/1       Running     0          1m
kube-system     canal-rjkl2                               3/3       Running     0          2m
kube-system     canal-w2qn9                               3/3       Running     0          2m
kube-system     canal-xnv8p                               3/3       Running     1          2m
kube-system     kube-dns-6748949cc9-prpg5                 3/3       Running     0          1m
kube-system     kube-dns-autoscaler-6c4b786f5-djf7s       1/1       Running     0          1m
kube-system     rke-ingress-controller-deploy-job-khzmf   0/1       Completed   0          1m
kube-system     rke-kubedns-addon-deploy-job-bkz2q        0/1       Completed   0          1m
kube-system     rke-network-plugin-deploy-job-t4cc2       0/1       Completed   0          2m

kubectl get nodes:

NAME          STATUS    ROLES                      AGE       VERSION
wdk-docker    Ready     controlplane,etcd,worker   10m       v1.10.1
wdk-docker2   Ready     controlplane,etcd,worker   10m       v1.10.1
wdk-docker3   Ready     controlplane,etcd,worker   10m       v1.10.1

kubectl get services:

NAME         TYPE        CLUSTER-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    443/TCP   10m
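
For reference, nothing Rancher-related is running at this point (no cattle-system namespace or service). A quick way to confirm that, assuming RKE wrote the kubeconfig next to my config file as kube_config_cluster_config.yml:

# a working Rancher install would show a cattle-system namespace and a
# cattle-service inside it; right now neither exists
kubectl --kubeconfig kube_config_cluster_config.yml get namespaces
kubectl --kubeconfig kube_config_cluster_config.yml get services --all-namespaces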

+1, I am getting the same “default backend - 404” instead of the Rancher UI.

An RKE cluster does not include Rancher by default; you need to follow the HA docs: https://rancher.com/docs/rancher/v2.x/en/installation/ha-server-install/
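
Roughly, the HA install adds a user addon to the RKE config that creates the cattle-system namespace, service, ingress and Rancher deployment. A minimal sketch of the relevant cluster config bit, assuming you have already downloaded and edited one of the templates from the HA docs (the file name below is just an example, use whatever you saved your edited template as):

addons_include:
  # path or URL to the edited Rancher HA template (namespace, service,
  # ingress and deployment for cattle-system)
  - ./rancher-ha-addon.yml

Then re-run rke up --config cluster_config.yml so RKE deploys the addon.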

Thanks Seb. This was caused by the Rancher controller not starting properly due to certs. I regenerated the certs, re-ran rke up, and now it works.
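
In case it helps anyone else: before pasting certs into the addon template, it’s worth checking them first (cert.pem below is a placeholder for your certificate file):

# check validity dates and that the CN/SAN matches the FQDN you will use
# for the Rancher UI; a mismatched or expired cert was my problem here
openssl x509 -noout -subject -dates -in cert.pem
openssl x509 -noout -text -in cert.pem | grep -A1 'Subject Alternative Name'

# after regenerating/fixing the certs in the template, apply the config again
rke up --config cluster_config.yml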

Hi wdk,
I recently tried to deploy Rancher 2.0 in HA following the HA docs from the Rancher home page, but I’m hitting the same error as you (default backend - 404), although I have already added certs. Could you show me the correct way to run it? Thanks.

If kubectl --kubeconfig kube_config_rancher-cluster.yml get pods --all-namespaces shows all pods as Running or Completed, then kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=rke-user-addon-deploy-job -n kube-system will show the logs of the user addon deploy job (the one that installs Rancher) and help pinpoint what’s wrong.
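
Once that job shows Completed, you can also look at the Rancher workload it deployed directly (assuming the stock template names, i.e. a deployment called cattle in the cattle-system namespace):

# what the user addon created, plus the logs of the Rancher container itself
kubectl --kubeconfig kube_config_rancher-cluster.yml -n cattle-system get deploy,pods,ingress
kubectl --kubeconfig kube_config_rancher-cluster.yml -n cattle-system logs deploy/cattle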

Hello, I have the same problem (default backend - 404).
I tried the command from superseb:

 kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=rke-user-addon-deploy-job -n kube-system
namespace "cattle-system" created
serviceaccount "cattle-admin" created
clusterrolebinding.rbac.authorization.k8s.io "cattle-crb" created
service "cattle-service" created
ingress.extensions "cattle-ingress-http" created
deployment.extensions "cattle" created

kubectl --kubeconfig=kube_config_rancher-cluster.yml get pods --all-namespaces
NAMESPACE       NAME                                      READY     STATUS      RESTARTS   AGE
cattle-system   cattle-56c896597d-j2qf2                   1/1       Running     0          18s
ingress-nginx   default-http-backend-564b9b6c5b-z5g4z     1/1       Running     0          23s
ingress-nginx   nginx-ingress-controller-2bx9c            1/1       Running     0          23s
ingress-nginx   nginx-ingress-controller-rtmdc            1/1       Running     0          23s
ingress-nginx   nginx-ingress-controller-tndft            1/1       Running     0          23s
kube-system     canal-bgdxj                               3/3       Running     0          34s
kube-system     canal-hcgr7                               3/3       Running     0          34s
kube-system     canal-qrrd8                               3/3       Running     0          34s
kube-system     kube-dns-5ccb66df65-t4wjr                 3/3       Running     0          29s
kube-system     kube-dns-autoscaler-6c4b786f5-8rb4q       1/1       Running     0          28s
kube-system     rke-ingress-controller-deploy-job-nzc9c   0/1       Completed   0          25s
kube-system     rke-kubedns-addon-deploy-job-4cr88        0/1       Completed   0          30s
kube-system     rke-network-plugin-deploy-job-l9jmn       0/1       Completed   0          35s
kube-system     rke-user-addon-deploy-job-r57nj           0/1       Completed   0          20s

If that is all successful (Running and Completed), the nginx ingress logs should show why it can’t be reached:

kubectl --kubeconfig=kube_config_rancher-cluster.yml logs -l app=ingress-nginx -n ingress-nginx
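
A “default backend - 404” from nginx usually means the Host header of your request doesn’t match any ingress rule, so it’s also worth comparing the URL you browse to against the host configured on the Rancher ingress (cattle-ingress-http, per the addon logs above):

# the host shown here is the FQDN you must use to reach the UI; hitting a
# node by IP instead will fall through to the default backend
kubectl --kubeconfig=kube_config_rancher-cluster.yml -n cattle-system describe ingress cattle-ingress-http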

Tried the Helm chart from rancher-stable, but no go.
Same error as of today.
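
In case it helps, here is what I’m checking on my side; a minimal sketch, assuming the chart went into cattle-system with its default app=rancher label and cert-manager handling the certificate:

# are the rancher pods up, and what do they log?
kubectl -n cattle-system get pods
kubectl -n cattle-system logs -l app=rancher

# the ingress hostname must match the URL you browse to
kubectl -n cattle-system get ingress

# only if cert-manager is used: the certificate should reach Ready
kubectl -n cattle-system get certificate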