Kubernetes Ingress Controller Fake Certificate is served instead of custom self-signed certs - cluster fails to work

Hi

I did an HA install with 3 nodes and an external L4 load balancer (nginx), following this doc: https://rancher.com/docs/rancher/v2.x/en/installation/ha/rke-add-on/layer-4-lb/#b-create-nginx-configuration.

The deployment completed without errors. The setup consists of 3 Ubuntu 16.04 boxes running on VirtualBox, sharing a host-only network. A DNS resolver (bind) and a load balancer VM also run in this network.

The load balancer is configured to pass SSL traffic through to the nodes and answers at https://rancher.rancher.lab.
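
For reference, the LB's nginx config follows the doc's TCP (stream) passthrough pattern; trimmed to the essentials it looks roughly like this (the upstream entries are my three nodes):

stream {
    upstream rancher_servers {
        least_conn;
        server 192.168.33.10:443 max_fails=3 fail_timeout=5s;
        server 192.168.33.11:443 max_fails=3 fail_timeout=5s;
        server 192.168.33.12:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers;
    }
}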

I created a CA using gnomint and one cert (CN=rancher.rancher.lab) for the rke deployment.

All boxes can ping each other and (reverse) resolve each other's domain names (node1.rancher.lab, node2.rancher.lab, node3.rancher.lab).

I was able to access the Rancher web UI and set the server-url to https://rancher.rancher.lab.

For some reason, the cattle-cluster-agent and cattle-node-agent pods fail and crash-loop.

I was able to retrieve the following error messages:
kubectl logs -n cattle-system cattle-cluster-agent-6f894484d9-qr9dj -f

INFO: Environment: CATTLE_ADDRESS=10.42.1.2 CATTLE_CA_CHECKSUM=479485d7bc0bf86102419eef5e7132c75feb84ca11bd0ebf4d605bd023a8f247 CATTLE_CLUSTER=true CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-6f894484d9-qr9dj CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.223.109:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.223.109:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.223.109 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.223.109:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.223.109 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.223.109 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 10.43.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local 0x06.io options ndots:5
ERROR: https://rancher.rancher.lab/ping is not accessible (Could not resolve host: rancher.rancher.lab)

kubectl logs -n cattle-system cattle-78b54f84c5-vh5bp -f

time="2018-10-03 20:56:48" level=info msg="Telemetry Client v0.5.1"
time="2018-10-03 20:56:48" level=info msg="Listening on 0.0.0.0:8114"
2018/10/03 20:56:59 [INFO] Handling backend connection request [machine-nhk86]
2018/10/03 21:04:04 [INFO] 2018/10/03 21:04:04 http: TLS handshake error from 10.0.2.15:39070: tls: failed to sign ECDHE parameters: rsa: internal error
2018/10/03 21:14:27 [INFO] Handling backend connection request [machine-nhk86]
2018/10/03 21:16:14 [INFO] 2018/10/03 21:16:14 http: TLS handshake error from 10.0.2.15:41254: tls: failed to sign ECDHE parameters: rsa: internal error
2018/10/03 21:18:34 [INFO] 2018/10/03 21:18:34 http: TLS handshake error from 10.0.2.15:41662: tls: failed to sign ECDHE parameters: rsa: internal error
2018/10/03 21:19:04 [INFO] 2018/10/03 21:19:04 http: TLS handshake error from 10.0.2.15:41752: tls: failed to sign ECDHE parameters: rsa: internal error
2018/10/03 21:46:14 [INFO] 2018/10/03 21:46:14 http: TLS handshake error from 10.0.2.15:46580: tls: failed to sign ECDHE parameters: rsa: internal error
2018/10/04 07:11:25 [INFO] Handling backend connection request [machine-v6nvc]
2018/10/04 07:14:47 [INFO] Running cluster events cleanup
2018/10/04 07:14:47 [INFO] Done running cluster events cleanup
2018/10/04 07:20:21 [INFO] Purged 1 expired tokens
2018/10/04 07:34:09 [INFO] 2018/10/04 07:34:09 http: TLS handshake error from 10.0.2.15:52274: tls: failed to sign ECDHE parameters: rsa: internal error

kubectl get pods --all-namespaces

NAMESPACE       NAME                                      READY   STATUS             RESTARTS   AGE
cattle-system   cattle-78b54f84c5-vh5bp                   1/1     Running            1          16h
cattle-system   cattle-cluster-agent-6f894484d9-qr9dj     0/1     CrashLoopBackOff   29         16h
cattle-system   cattle-node-agent-8njn9                   1/1     Running            10         16h
cattle-system   cattle-node-agent-rhm4m                   1/1     Running            23         16h
cattle-system   cattle-node-agent-zhwtr                   0/1     CrashLoopBackOff   29         16h
ingress-nginx   default-http-backend-797c5bc547-b56th     1/1     Running            1          16h
ingress-nginx   nginx-ingress-controller-67b8f            1/1     Running            1          16h
ingress-nginx   nginx-ingress-controller-9f6p5            1/1     Running            1          16h
ingress-nginx   nginx-ingress-controller-xn7xh            1/1     Running            1          16h
kube-system     canal-2dbxd                               3/3     Running            3          16h
kube-system     canal-7wqm9                               3/3     Running            3          16h
kube-system     canal-psxzf                               3/3     Running            3          16h
kube-system     kube-dns-7588d5b5f5-bvdmg                 3/3     Running            3          16h
kube-system     kube-dns-autoscaler-5db9bbb766-k78kt      1/1     Running            1          16h
kube-system     metrics-server-97bc649d5-rqnfs            1/1     Running            1          16h
kube-system     rke-ingress-controller-deploy-job-xxjnf   0/1     Completed          0          16h
kube-system     rke-kubedns-addon-deploy-job-l75q5        0/1     Completed          0          16h
kube-system     rke-metrics-addon-deploy-job-bpvzb        0/1     Completed          0          16h
kube-system     rke-network-plugin-deploy-job-tdq27       0/1     Completed          0          16h
kube-system     rke-user-addon-deploy-job-68qlp           0/1     Completed          0          16h

I don't know why the cattle-cluster-agent cannot resolve rancher.rancher.lab; DNS works at node level.
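
For what it's worth, cluster-internal DNS can be tested directly with a throwaway pod (the busybox image and pod name here are arbitrary; this resolves via kube-dns instead of the node's resolv.conf):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup rancher.rancher.lab

If this fails while dig on the nodes works, the problem is in kube-dns or its upstream forwarding rather than in bind itself.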

This is my rancher-cluster.yml I used with rke to deploy the cluster:
nodes:
- address: 192.168.33.10 # hostname or IP to access nodes
  user: root # root user (usually 'root')
  role: [controlplane,etcd,worker] # K8s roles for node
  ssh_key_path: /root/.ssh/id_rsa # path to PEM file
- address: 192.168.33.11
  user: root
  role: [controlplane,etcd,worker]
  ssh_key_path: /root/.ssh/id_rsa
- address: 192.168.33.12
  user: root
  role: [controlplane,etcd,worker]
  ssh_key_path: /root/.ssh/id_rsa

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

addons: |-
  ---
  kind: Namespace
  apiVersion: v1
  metadata:
    name: cattle-system
  ---
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: cattle-admin
    namespace: cattle-system
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cattle-crb
    namespace: cattle-system
  subjects:
  - kind: ServiceAccount
    name: cattle-admin
    namespace: cattle-system
  roleRef:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: cattle-keys-ingress
    namespace: cattle-system
  type: Opaque
  data:
    tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM3akNDQWxlZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRVUZBREJXTVFzd0NRWURWUVFHRXdKRFNERU0KTUFvR0ExVUVDQk1EV25Wbk1Rd3dDZ1lEVlFRSEV3TmFkV2N4RlRBVEJnTlZCQW9UREZKaGJtTm9aWElnVEdGaQpjekVVTUJJR0ExVUVBeE1MY21GdVkyaGxjaTVzWVdJd0hoY05NVGd4TURBek1UUTBNVEUyV2hjTk1qTXhNREF6Ck1UUTBNVEUyV2pCSE1Rc3dDUVlEVlFRR0V3SkRTREVNTUFvR0ExVUVDQk1EV25Wbk1Rd3dDZ1lEVlFRSEV3TmEKZFdjeEhEQWFCZ05WQkFNVEUzSmhibU5vWlhJdWNtRnVZMmhsY2k1c1lXSXdnZ0VpTUEwR0NTcUdTSWIzRFFFQgpBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3NNUWtzMFVneDZMNDhac1h3Q1ZKS3hvQXZiOCtHOHRMcWdITjR6ZzBUClBBVmYzMmFJV0hDOVdZcVpvRFpsY21wMnI4a3ZyQ3V6ZEJMa3F0YUNzRHN5aVVSS2drQzY5bkFpQm9RSTVwK1kKSUpzWDg0aXI5dC8vZXYrU0Q2TC9oaWxrS1YwWGtlb24vZ0g0VnVuU2FIaTNqU3ZweHk5S2k1dUdXeTZacG1vbQpxa3NtdTd6S3RwYUhXYUFGR283a1owaUlpeG44eHl6ZUpuNmt4NjEyWVp3NW9DazQvUmVVQXJKaVpxS3Z0MGxoCmgrVGoxSDEzamluVUgzLy9xazY4Q1dUWnBMdG8wSmdFdUh6endrcGlvK2dPS3paVDhjZlhyV0NpVE5rQ0lXVmgKeDM2MC9VczdubHJBM05jREd5ZmdMUWhqSGpSZENuazFrZUpGb1d3Z3JoRFpBZ01CQUFHalZ6QlZNQjhHQTFVZApJd1FZTUJhQUZQMGZPRDM3bkhaV2NVRHNuaGdlU01GeXBHMzhNQXdHQTFVZEV3RUIvd1FDTUFBd0R3WURWUjBQCkFRSC9CQVVEQXdlNEFEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFOQmdrcWhraUc5dzBCQVFVRkFBT0IKZ1FDRE4vbGVHR2JVOHplbDNFNEdyQ3d3cWV4K3VJZkRZRmV1M0hNU09lV1BIeURUR3VrZVJDd0lmVmFKd2I0Qgo4RFNqdnFhOGNrd29JNzRxbjlTTnJJclFHQ3pvU2RzZTVNVUFzckc2T0twZnJBK3FJUG5QSTljckNFcnRzamFmCk1KcTRuM3ViclJFMzJOM3g5WTMrMUFrMkw5VWtvMldlbUY5TldhamxLZi9Banc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== 
    tls.key: XXXX
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: cattle-keys-server
    namespace: cattle-system
  type: Opaque
  data:
    cacerts.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNpRENDQWZHZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRVUZBREJXTVFzd0NRWURWUVFHRXdKRFNERU0KTUFvR0ExVUVDQk1EV25Wbk1Rd3dDZ1lEVlFRSEV3TmFkV2N4RlRBVEJnTlZCQW9UREZKaGJtTm9aWElnVEdGaQpjekVVTUJJR0ExVUVBeE1MY21GdVkyaGxjaTVzWVdJd0lCY05NVGd4TURBek1UUTBNREkyV2hnUE1qQTJNREEyCk1ETXhOVFF3TWpaYU1GWXhDekFKQmdOVkJBWVRBa05JTVF3d0NnWURWUVFJRXdOYWRXY3hEREFLQmdOVkJBY1QKQTFwMVp6RVZNQk1HQTFVRUNoTU1VbUZ1WTJobGNpQk1ZV0p6TVJRd0VnWURWUVFERXd0eVlXNWphR1Z5TG14aApZakNCbnpBTkJna3Foa2lHOXcwQkFRRUZBQU9CalFBd2dZa0NnWUVBdlIzbTFKS2pvNjFNZ3NMMndwNFFwQW4zCmZ6Q0FSR3J5aDRRWElrcHhJQmxUQUJXQld6MGUyTENTMG9YRlNrVFJPTjR5NXQvZHlZOEJxaDdtbHpwaVJJckkKZ0FHaEVtdytqOVMvZmREcDU0eG1hbVZYditHamtnckNPVDU5Mm50Z3pROGxsMjhjN2ptY3NpTkszclJGZWlCWQpvSWtJL2FhMWRBcUk5QmlxM0NzQ0F3RUFBYU5rTUdJd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBUEJnTlZIUThCCkFmOEVCUU1EQndZQU1CMEdBMVVkRGdRV0JCVDlIemc5KzV4MlZuRkE3SjRZSGtqQmNxUnQvREFmQmdOVkhTTUUKR0RBV2dCVDlIemc5KzV4MlZuRkE3SjRZSGtqQmNxUnQvREFOQmdrcWhraUc5dzBCQVFVRkFBT0JnUUNQaHk5NQpEYnZrMTdnaU4xMERUdU5tS2tpNTJ3cnVRRVZLUENnUE1aaERMd3g3eTFmUXU1R1FicVZPcTFwSmtjR1NTcHllCkxsYWc5a28rUlRhdmI3NVp0MVRGR0I4YS9TOUdaWjV4ZmxEd1RhdjhuU281TFFBaEt5THBtaGxPTzJQWUxya3gKMUdJVEtxc3VEcGw5U2FYQ3pqaVk3elUrNVdlUDJtbDFqTWVUR1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: cattle-system
    name: cattle-service
    labels:
      app: cattle
  spec:
    ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
    selector:
      app: cattle
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    namespace: cattle-system
    name: cattle-ingress-http
    annotations:
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"   # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"   # Max time in seconds for ws to remain shell window open
  spec:
    rules:
    - host: rancher.rancher.lab
      http:
        paths:
        - backend:
            serviceName: cattle-service
            servicePort: 80
    tls:
    - secretName: cattle-keys-ingress
      hosts:
      - rancher.rancher.lab
  ---
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    namespace: cattle-system
    name: cattle
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: cattle
      spec:
        serviceAccountName: cattle-admin
        containers:
        - image: rancher/rancher:v2.0.8
          imagePullPolicy: Always
          name: cattle-server
  #       env:
  #       - name: HTTP_PROXY
  #         value: "http://your_proxy_address:port"
  #       - name: HTTPS_PROXY
  #         value: "http://your_proxy_address:port"
  #       - name: NO_PROXY
  #         value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
          livenessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 60
          readinessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 20
            periodSeconds: 10
          ports:
          - containerPort: 80
            protocol: TCP
          - containerPort: 443
            protocol: TCP
          volumeMounts:
          - mountPath: /etc/rancher/ssl
            name: cattle-keys-volume
            readOnly: true
        volumes:
        - name: cattle-keys-volume
          secret:
            defaultMode: 420
            secretName: cattle-keys-server
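
The base64 values in the two Secrets above must be single-line base64 of the PEM files. For reference, they can be produced like this (the .key path is an assumption; the .crt paths are the ones used further below):

base64 -w0 /vagrant/ca/rancher.rancher.lab.cert.crt   # tls.crt
base64 -w0 /vagrant/ca/rancher.rancher.lab.cert.key   # tls.key (redacted as XXXX above)
base64 -w0 /vagrant/ca/rancher.lab.ca.crt             # cacerts.pem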

The ingress appears to be configured correctly:

kubectl get ingress -n cattle-system
NAME                  HOSTS                 ADDRESS                                     PORTS     AGE
cattle-ingress-http   rancher.rancher.lab   192.168.33.10,192.168.33.11,192.168.33.12   80, 443   4h

I was able to reproduce the issue with v2.0.8, v2.0.7, and v2.0.6.

Any help is appreciated.

cheers!

It turns out that the nginx ingress controller's automatically generated default certificate (/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate) is being served instead of the CA and cert that I configured in rancher-cluster.yml. The ingress controller falls back to this self-generated certificate when a request does not match any configured TLS host, or when the referenced TLS secret is missing or invalid.

The ingress config:
kubectl -n cattle-system describe ingress

Name:             cattle-ingress-http
Namespace:        cattle-system
Address:          192.168.33.10,192.168.33.11,192.168.33.12
Default backend:  default-http-backend:80 ()
TLS:
  cattle-keys-ingress terminates rancher.rancher.lab
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  rancher.rancher.lab
                             cattle-service:80 ()
Annotations:
  nginx.ingress.kubernetes.io/proxy-send-timeout: 1800
  field.cattle.io/publicEndpoints: [{"addresses":["192.168.33.10","192.168.33.11","192.168.33.12"],"port":443,"protocol":"HTTPS","serviceName":"cattle-system:cattle-service","ingressName":"cattle-system:cattle-ingress-http","hostname":"rancher.rancher.lab","allNodes":false}]
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/proxy-connect-timeout":"30","nginx.ingress.kubernetes.io/proxy-read-timeout":"1800","nginx.ingress.kubernetes.io/proxy-send-timeout":"1800"},"name":"cattle-ingress-http","namespace":"cattle-system"},"spec":{"rules":[{"host":"rancher.rancher.lab","http":{"paths":[{"backend":{"serviceName":"cattle-service","servicePort":80}}]}}],"tls":[{"hosts":["rancher.rancher.lab"],"secretName":"cattle-keys-ingress"}]}}
  nginx.ingress.kubernetes.io/proxy-connect-timeout: 30
  nginx.ingress.kubernetes.io/proxy-read-timeout: 1800
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ---                ----                      -------
  Normal  CREATE  25m                nginx-ingress-controller  Ingress cattle-system/cattle-ingress-http
  Normal  CREATE  25m                nginx-ingress-controller  Ingress cattle-system/cattle-ingress-http
  Normal  CREATE  25m                nginx-ingress-controller  Ingress cattle-system/cattle-ingress-http
  Normal  UPDATE  24m (x2 over 24m)  nginx-ingress-controller  Ingress cattle-system/cattle-ingress-http
  Normal  UPDATE  24m (x2 over 24m)  nginx-ingress-controller  Ingress cattle-system/cattle-ingress-http
  Normal  UPDATE  24m (x2 over 24m)  nginx-ingress-controller  Ingress cattle-system/cattle-ingress-http

The config mentions a backend on port 80, but not 443. Also, TLS seems to be configured correctly ("cattle-keys-ingress terminates rancher.rancher.lab").

Any help is appreciated.

Another update.

While debugging, I discovered the following logs:

kubectl logs -n cattle-system cattle-cluster-agent-6f894484d9-qr9dj -f
ERROR: https://rancher.rancher.lab/ping is not accessible (Could not resolve host: rancher.rancher.lab)

kubectl logs -n cattle-system cattle-node-agent-pzbrs
time="2018-10-04T19:29:43Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"

kubectl logs -n cattle-system cattle-node-agent-lghjd
time="2018-10-04T19:31:10Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"

kubectl logs -n cattle-system cattle-node-agent-46rbd
INFO: Using resolv.conf: nameserver 192.168.33.13 nameserver 10.0.2.3 search blah.url
INFO: https://rancher.rancher.lab/ping is accessible

kubectl logs -n cattle-system cattle-cluster-agent-6b6495f577-v2kqc
ERROR: https://rancher.rancher.lab/ping is not accessible (Could not resolve host: rancher.rancher.lab)

Based on this output I concluded that either DNS is not working properly within the cattle pods, or I still have issues with the self-signed certs I provided to rke for the cluster deployment.

I was able to log in to one of the cattle-node-agent pods and from there SSH into my load balancer (which the server-url points to). This leads me to the conclusion that DNS is working as intended.

Before another cattle-node-agent pod crash-looped, I managed to check the certificate served at the server-url:

kubectl exec -ti -n cattle-system cattle-node-agent-pzbrs bash
node1:/# openssl s_client -host rancher.rancher.lab -port 443           
CONNECTED(00000003)                                                          
depth=0 O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate

Please note that in my case the server-url points to an L4 load balancer, which transparently passes traffic through to all nodes.

Therefore I conclude that the issue is caused by incorrect certificates rather than DNS, even though some of the logs above might suggest otherwise.

I still need to figure out what’s wrong with my certs…

The code that generates the "not accessible" message is https://github.com/rancher/rancher/blob/master/package/run.sh#L89; it does a curl to the configured server-url. You are not showing the logs from all the agent pods, so I can't tell why one can resolve and the other can't. It would help to know where the DNS server that serves rancher.lab is running and where it should be reachable.
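
That check boils down to a curl against the server-url, so it can be replicated by hand from a node or from inside a pod (approximate flags; run.sh's exact invocation may differ):

curl -k -s https://rancher.rancher.lab/ping
curl -k -s https://rancher.rancher.lab/v3/settings/cacerts

The first should print pong; the second should return the CA certificate you configured.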

Using openssl to determine the served certificate needs -servername (SNI), like openssl s_client -connect rancher.rancher.lab:443 -servername rancher.rancher.lab
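
Comparing the two variants makes the fallback visible. With OpenSSL 1.0.2 (the Ubuntu 16.04 default) no SNI is sent unless -servername is given, so the first command shows what a non-SNI client gets:

openssl s_client -connect rancher.rancher.lab:443 </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect rancher.rancher.lab:443 -servername rancher.rancher.lab </dev/null 2>/dev/null | openssl x509 -noout -subject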

There are some checks in https://github.com/superseb/rancher-check to validate the configured certificates too; it usually comes down to wrong encoding or missing intermediates.

Hi superseb

Thanks for your support.
Here is some more info you requested. Hope it helps:

Setup:

root@node1:~# dig node1.rancher.lab +short
192.168.33.10
root@node1:~# dig node2.rancher.lab +short
192.168.33.11
root@node1:~# dig node3.rancher.lab +short
192.168.33.12
root@node1:~# dig lb.rancher.lab +short
192.168.33.14
root@node1:~# dig rancher.rancher.lab +short
192.168.33.14
root@node1:~# dig resolver.rancher.lab +short
192.168.33.13

Connectivity:

root@node1:~# openssl s_client -host node1.rancher.lab -port 443 2>&1 | grep subject
subject=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
^C
root@node1:~# openssl s_client -host node2.rancher.lab -port 443 2>&1 | grep subject
subject=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
^C
root@node1:~# openssl s_client -host node3.rancher.lab -port 443 2>&1 | grep subject
subject=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
^C
root@node1:~# openssl s_client -host lb.rancher.lab -port 443 2>&1 | grep subject
subject=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
^C
root@node1:~# ping -n 1 resolver.rancher.lab
connect: Invalid argument
root@node1:~# ping -t 1 resolver.rancher.lab
PING resolver.rancher.lab (192.168.33.13) 56(84) bytes of data.
64 bytes from resolver.rancher.lab (192.168.33.13): icmp_seq=1 ttl=64 time=0.839 ms

Pods:

root@node1:~# kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS             RESTARTS   AGE
cattle-system   cattle-78b54f84c5-dfkd9                   1/1     Running            0          29m
cattle-system   cattle-cluster-agent-868f5b6d7c-85668     0/1     CrashLoopBackOff   7          16m
cattle-system   cattle-node-agent-dx9nr                   0/1     CrashLoopBackOff   7          16m
cattle-system   cattle-node-agent-fh5mq                   1/1     Running            1          16m
cattle-system   cattle-node-agent-rk9wd                   1/1     Running            7          16m
ingress-nginx   default-http-backend-797c5bc547-7956c     1/1     Running            0          29m
ingress-nginx   nginx-ingress-controller-4774n            1/1     Running            0          29m
ingress-nginx   nginx-ingress-controller-kg2s9            1/1     Running            0          29m
ingress-nginx   nginx-ingress-controller-wpw6d            1/1     Running            0          29m
kube-system     canal-bwlhv                               3/3     Running            0          29m
kube-system     canal-dh22m                               3/3     Running            0          29m
kube-system     canal-srwm4                               3/3     Running            0          29m
kube-system     kube-dns-7588d5b5f5-mnkcn                 3/3     Running            0          29m
kube-system     kube-dns-autoscaler-5db9bbb766-j5845      1/1     Running            0          29m
kube-system     metrics-server-97bc649d5-x47dh            1/1     Running            0          29m
kube-system     rke-ingress-controller-deploy-job-kgz6q   0/1     Completed          0          29m
kube-system     rke-kubedns-addon-deploy-job-6hgn2        0/1     Completed          0          29m
kube-system     rke-metrics-addon-deploy-job-rxmpq        0/1     Completed          0          29m
kube-system     rke-network-plugin-deploy-job-tvhc9       0/1     Completed          0          29m
kube-system     rke-user-addon-deploy-job-td5mg           0/1     Completed          0          29m

Logs from cattle pods:

root@node1:~# kubectl logs -n cattle-system cattle-78b54f84c5-dfkd9                                                                                                                                                                                           
2018/10/05 17:49:38 [INFO] Rancher version v2.0.8 is starting                                                                            
2018/10/05 17:49:38 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0}
2018/10/05 17:49:38 [INFO] Listening on /tmp/log.sock                                                                                  
2018/10/05 17:49:38 [INFO] Activating driver gke                                                                                         
2018/10/05 17:49:38 [INFO] Activating driver gke done                                                                      
2018/10/05 17:49:38 [INFO] Activating driver aks                                                                           
2018/10/05 17:49:38 [INFO] Activating driver aks done                                                                                                 
2018/10/05 17:49:38 [INFO] Activating driver eks                                                                                     
2018/10/05 17:49:38 [INFO] Activating driver eks done                                                                                  
2018/10/05 17:49:38 [INFO] Activating driver import                                                                                                                                                                                    
2018/10/05 17:49:38 [INFO] Activating driver import done                                                                                                                                                        
2018/10/05 17:49:38 [INFO] Activating driver rke                                 
<snip>
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role project-owner
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role project-owner
2018/10/05 17:49:51 [ERROR] error updating ns p-mh8s8 status: Operation cannot be fulfilled on namespaces "p-mh8s8": the object has been modified; please apply your changes to the latest version and try again
2018/10/05 17:49:51 [INFO] Creating clusterRole project-owner-promoted for project access to global resource.
2018/10/05 17:49:51 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-mh8s8 to namespace=kube-public
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role admin
2018/10/05 17:49:51 [INFO] uploading vmwarevsphereConfig to node schema
2018/10/05 17:49:51 [INFO] Created machine for node [192.168.33.12]
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role admin
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role admin
2018/10/05 17:49:51 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-hwgxc to namespace=default
2018/10/05 17:49:51 [ERROR] ProjectController local/p-hwgxc [project-namespace-auth] failed with : clusterroles.rbac.authorization.k8s.io "p-hwgxc-namespaces-edit" already exists
2018/10/05 17:49:51 [INFO] uploading azureConfig to node schema
2018/10/05 17:49:51 [INFO] Updating clusterRole project-owner-promoted for project access to global resource.
2018/10/05 17:49:51 [INFO] Updating clusterRole project-owner-promoted for project access to global resource.
2018/10/05 17:49:51 [INFO] Creating globalRoleBindings for u-b4qkhsnliz
2018/10/05 17:49:51 [INFO] uploading azureConfig to node schema
2018/10/05 17:49:51 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-mh8s8 to namespace=ingress-nginx
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role project-owner
2018/10/05 17:49:51 [INFO] Creating clusterRole for roleTemplate Create Namespaces (create-ns).
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role project-owner
2018/10/05 17:49:51 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-mh8s8 to namespace=cattle-system
2018/10/05 17:49:51 [INFO] Created machine for node [192.168.33.10]
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role project-owner
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role admin
2018/10/05 17:49:51 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-b6nhb role p-hwgxc-namespaces-edit.
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role admin
2018/10/05 17:49:51 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding grb-nwkd6
2018/10/05 17:49:51 [INFO] [mgmt-auth-grb-controller] Creating clusterRoleBinding for globalRoleBinding grb-nwkd6 for user u-b4qkhsnliz with role cattle-globalrole-user
2018/10/05 17:49:51 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-b6nhb role project-owner-promoted.
2018/10/05 17:49:51 [INFO] Creating roleBinding User user-b6nhb Role admin
2018/10/05 17:49:51 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-b6nhb role p-mh8s8-namespaces-edit.
2018/10/05 17:49:51 [INFO] Creating token for user u-b4qkhsnliz
2018/10/05 17:49:51 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-b6nhb role create-ns.
2018/10/05 17:49:51 [INFO] Creating roleBinding User u-b4qkhsnliz Role cluster-owner
2018/10/05 17:49:52 [INFO] [mgmt-auth-crtb-controller] Creating clusterRoleBinding for membership in cluster local for subject u-b4qkhsnliz
2018/10/05 17:49:52 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-b4qkhsnliz with role cluster-owner in namespace
2018/10/05 17:49:52 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-b4qkhsnliz with role cluster-owner in namespace
2018/10/05 17:49:52 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-b4qkhsnliz with role cluster-owner in namespace
2018/10/05 17:49:52 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:49:52 [INFO] Updating clusterRoleBinding clusterrolebinding-xl64b for project access to global resource for subject user-b6nhb role project-owner-promoted.
2018/10/05 17:49:52 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:49:52 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:49:52 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:49:52 [INFO] Deleting roleBinding clusterrolebinding-lctxc
2018/10/05 17:49:52 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:49:52 [INFO] Updating clusterRoleBinding clusterrolebinding-75rz2 for project access to global resource for subject user-b6nhb role create-ns.
2018/10/05 17:49:53 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-mh8s8 to namespace=kube-system
2018/10/05 17:49:56 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:50:04 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:50:11 [INFO] Updating catalog library
2018/10/05 17:50:12 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:50:39 [INFO] Catalog sync done. 25 templates created, 0 templates updated, 0 templates deleted
2018/10/05 17:51:16 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:53:24 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:54:19 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:54:34 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
2018/10/05 17:57:40 [ERROR] ClusterController local [cluster-deploy] failed with : waiting for server-url setting to be set
W1005 17:58:21.038673       7 reflector.go:341] github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:139: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
2018/10/05 17:59:42 [INFO] Creating token for user user-b6nhb
time="2018-10-05 17:59:42" level=info msg="Telemetry Client v0.5.1"
time="2018-10-05 17:59:42" level=info msg="Listening on 0.0.0.0:8114"
2018/10/05 17:59:45 [ERROR] error updating ns user-b6nhb status: Operation cannot be fulfilled on namespaces "user-b6nhb": the object has been modified; please apply your changes to the latest version and try again

root@node1:~# kubectl logs -n cattle-system cattle-cluster-agent-868f5b6d7c-85668
INFO: Environment: CATTLE_ADDRESS=10.42.1.2 CATTLE_CA_CHECKSUM=ee5d5c27aa9e621a0dfbcc589633e98d527722f4b9e30cfef516cac99b66826c CATTLE_CLUSTER=true CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-868f5b6d7c-85668 CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.110.102:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.110.102 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 10.43.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local 0x06.io options ndots:5
ERROR: https://rancher.rancher.lab/ping is not accessible (Could not resolve host: rancher.rancher.lab)

root@node1:~# kubectl logs -n cattle-system cattle-node-agent-dx9nr
INFO: Environment: CATTLE_ADDRESS=10.0.2.15 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=ee5d5c27aa9e621a0dfbcc589633e98d527722f4b9e30cfef516cac99b66826c CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=192.168.33.12 CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.110.102:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.110.102 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 192.168.33.13 nameserver 10.0.2.3 search 0x06.io
INFO: https://rancher.rancher.lab/ping is accessible
ERROR: Failed to pull the cacert from the rancher server settings at https://rancher.rancher.lab/v3/settings/cacerts

root@node1:~# kubectl logs -n cattle-system cattle-node-agent-fh5mq                                        
INFO: Environment: CATTLE_ADDRESS=10.0.2.15 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=ee5d5c27aa9e621a0dfbcc589633e98d527722f4b9e30cfef516cac99b66826c CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=192.168.33.10 CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.110.102:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.110.102 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 192.168.33.13 nameserver 10.0.2.3 search 0x06.io                                          
INFO: https://rancher.rancher.lab/ping is accessible                                                                          
INFO: Value from https://rancher.rancher.lab/v3/settings/cacerts is an x509 certificate                                                                          
time="2018-10-05T18:01:47Z" level=info msg="Rancher agent version v2.0.8 is starting"                      
time="2018-10-05T18:01:47Z" level=info msg="Listening on /tmp/log.sock"                                                       
time="2018-10-05T18:01:47Z" level=info msg="Option customConfig=map[address:10.0.2.15 internalAddress: roles:[] label:map[]]" 
time="2018-10-05T18:01:47Z" level=info msg="Option etcd=false"                                                                                                   
time="2018-10-05T18:01:47Z" level=info msg="Option controlPlane=false"                                     
time="2018-10-05T18:01:47Z" level=info msg="Option worker=false"                                                              
time="2018-10-05T18:01:47Z" level=info msg="Option requestedHostname=192.168.33.10"                                           
time="2018-10-05T18:01:47Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 6tp7n84vg66rdh4zcx46gjpvk9jbdz8gmtnt4fxvq7xll67p5zmlch"
time="2018-10-05T18:01:47Z" level=info msg="Connecting to proxy" url="wss://rancher.rancher.lab/v3/connect"
time="2018-10-05T18:01:47Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:01:47Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:01:57Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 6tp7n84vg66rdh4zcx46gjpvk9jbdz8gmtnt4fxvq7xll67p5zmlch"
time="2018-10-05T18:01:57Z" level=info msg="Connecting to proxy" url="wss://rancher.rancher.lab/v3/connect"
time="2018-10-05T18:01:57Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:01:57Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:02:07Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 6tp7n84vg66rdh4zcx46gjpvk9jbdz8gmtnt4fxvq7xll67p5zmlch"
time="2018-10-05T18:02:07Z" level=info msg="Connecting to proxy" url="wss://rancher.rancher.lab/v3/connect"
time="2018-10-05T18:02:07Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:02:07Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:02:17Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 6tp7n84vg66rdh4zcx46gjpvk9jbdz8gmtnt4fxvq7xll67p5zmlch"

root@node1:~# kubectl logs -n cattle-system cattle-node-agent-rk9wd                                        
INFO: Environment: CATTLE_ADDRESS=10.0.2.15 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=ee5d5c27aa9e621a0dfbcc589633e98d527722f4b9e30cfef516cac99b66826c CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=192.168.33.11 CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.110.102:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.110.102:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.110.102 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.110.102 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 192.168.33.13 nameserver 10.0.2.3 search 0x06.io                                          
INFO: https://rancher.rancher.lab/ping is accessible                                                                          
INFO: Value from https://rancher.rancher.lab/v3/settings/cacerts is an x509 certificate                                                                          
time="2018-10-05T18:15:45Z" level=info msg="Rancher agent version v2.0.8 is starting"                      
time="2018-10-05T18:15:45Z" level=info msg="Option controlPlane=false"                                                        
time="2018-10-05T18:15:45Z" level=info msg="Option worker=false"                                                              
time="2018-10-05T18:15:45Z" level=info msg="Option requestedHostname=192.168.33.11"                                                                              
time="2018-10-05T18:15:45Z" level=info msg="Listening on /tmp/log.sock"                                    
time="2018-10-05T18:15:45Z" level=info msg="Option customConfig=map[label:map[] address:10.0.2.15 internalAddress: roles:[]]" 
time="2018-10-05T18:15:45Z" level=info msg="Option etcd=false"                                                                
time="2018-10-05T18:15:45Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 6tp7n84vg66rdh4zcx46gjpvk9jbdz8gmtnt4fxvq7xll67p5zmlch"
time="2018-10-05T18:15:45Z" level=info msg="Connecting to proxy" url="wss://rancher.rancher.lab/v3/connect"
time="2018-10-05T18:15:45Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:15:45Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:15:55Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 6tp7n84vg66rdh4zcx46gjpvk9jbdz8gmtnt4fxvq7xll67p5zmlch"
time="2018-10-05T18:15:55Z" level=info msg="Connecting to proxy" url="wss://rancher.rancher.lab/v3/connect"
time="2018-10-05T18:15:55Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:15:55Z" level=error msg="Failed to connect to proxy" error="x509: certificate signed by unknown authority"
time="2018-10-05T18:16:05Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 6tp7n84vg66rdh4zcx46gjpvk9jbdz8gmtnt4fxvq7xll67p5zmlch"

Please note that this is a new rke deployment. This time, none of the cattle pods are able to connect to the server-url (rancher.rancher.lab). No config changes have been made since the last deployment; I just restored a VirtualBox snapshot and executed "rke up" again.

I did not know that openssl s_client needs -servername. Done this way, it looks like the correct cert is in use:

root@node1:~# openssl s_client -connect rancher.rancher.lab:443 -servername rancher.rancher.lab
CONNECTED(00000003)
depth=0 CN = rancher.rancher.lab
verify error:num=18:self signed certificate
verify return:1                                     
depth=0 CN = rancher.rancher.lab
verify return:1
---          
Certificate chain
 0 s:/CN=rancher.rancher.lab
   i:/CN=rancher.rancher.lab
---
Server certificate
-----BEGIN CERTIFICATE-----
MIICzDCCAbSgAwIBAgIBAjANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNyYW5j
aGVyLnJhbmNoZXIubGFiMB4XDTE4MTAwNDE1MjQwMFoXDTE5MTAwNDE1MjIwMFow
HjEcMBoGA1UEAxMTcmFuY2hlci5yYW5jaGVyLmxhYjCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBANx9lMHuha68GSA03B6DSRduYvKy7ChDOwQrWjJgaCRx
rvNg/YDLSNK9rSfGwRQdihh/iHbNxDUU6dy30cykqyQ+7BJllqT8bwP7m/bvnRtT
nVmJiuH+OkGsZ4bmieohGSK1AcP8AbYm0eO3oGeMore2YvtF2Z+qvJ0nuLr/aU0S
HkWfEumqE/lVc02xqsjwsH+ODbe7JeZmsPVG9Zc8CscRCU9LQIYNtJA+Fn1+YN8X
lGR4guz9qf50DYXzD0dN0HOnNQV82QGQJGL8fvHM4KUer9agCJ7pGIJums/0vTeE
nfgBJBG3pVoZLBUu7V+NDuTZUv+Z6cAMxZxlcvUbmUECAwEAAaMVMBMwEQYJYIZI
AYb4QgEBBAQDAgZAMA0GCSqGSIb3DQEBCwUAA4IBAQBKuvA66BcCgGI3ph2PpMA9
hBVlbx4u4R5O/Y9V2a9x5WFx8Gc7+M7tVqs/LOJAHQ97oM0gXHO0OVkS56lUQU8i
QvA0ZP/LwLQbvy3JEwZziCDKB4GVbS5GL0nP/7hsmwsDsoYl/qK8J/+nLkDWeoPS
1Sd+65VvP9iB0ZL2Uj/VlTmprCvQqeBYFetrSdV+eJYci6o5n4PSjTgiOW1yFeyw
rUpMZ36eSMxMFLREpbpcRwqoRqaHlaChuBswyBJkUANuASezXlcGEyXVDKhRSvkk
uAB2+S4d2mSkqAfsakoMbKb8X1xZHLQ29hEozfd625sjr8HDDyHCtrwaqsQUrrfc
-----END CERTIFICATE-----
subject=/CN=rancher.rancher.lab
issuer=/CN=rancher.rancher.lab
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 1410 bytes and written 459 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 52AE66F47FF5001D342F304CC79947D9A43621F2CC6C902EA9B34C29CFC1E495
    Session-ID-ctx:
    Master-Key: E16CDF43E233B041C8681A90E337061BCBCF4286F238DDC7FB3A5B8A883806E4004B1F8C0F5749431E7536AB20FC9F2D
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 600 (seconds)

The issuer of this cert is:
root@node1:~# echo "-----BEGIN CERTIFICATE-----
MIICzDCCAbSgAwIBAgIBAjANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNyYW5j
aGVyLnJhbmNoZXIubGFiMB4XDTE4MTAwNDE1MjQwMFoXDTE5MTAwNDE1MjIwMFow
HjEcMBoGA1UEAxMTcmFuY2hlci5yYW5jaGVyLmxhYjCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBANx9lMHuha68GSA03B6DSRduYvKy7ChDOwQrWjJgaCRx
rvNg/YDLSNK9rSfGwRQdihh/iHbNxDUU6dy30cykqyQ+7BJllqT8bwP7m/bvnRtT
nVmJiuH+OkGsZ4bmieohGSK1AcP8AbYm0eO3oGeMore2YvtF2Z+qvJ0nuLr/aU0S
HkWfEumqE/lVc02xqsjwsH+ODbe7JeZmsPVG9Zc8CscRCU9LQIYNtJA+Fn1+YN8X
lGR4guz9qf50DYXzD0dN0HOnNQV82QGQJGL8fvHM4KUer9agCJ7pGIJums/0vTeE
nfgBJBG3pVoZLBUu7V+NDuTZUv+Z6cAMxZxlcvUbmUECAwEAAaMVMBMwEQYJYIZI
AYb4QgEBBAQDAgZAMA0GCSqGSIb3DQEBCwUAA4IBAQBKuvA66BcCgGI3ph2PpMA9
hBVlbx4u4R5O/Y9V2a9x5WFx8Gc7+M7tVqs/LOJAHQ97oM0gXHO0OVkS56lUQU8i
QvA0ZP/LwLQbvy3JEwZziCDKB4GVbS5GL0nP/7hsmwsDsoYl/qK8J/+nLkDWeoPS
1Sd+65VvP9iB0ZL2Uj/VlTmprCvQqeBYFetrSdV+eJYci6o5n4PSjTgiOW1yFeyw
rUpMZ36eSMxMFLREpbpcRwqoRqaHlaChuBswyBJkUANuASezXlcGEyXVDKhRSvkk
uAB2+S4d2mSkqAfsakoMbKb8X1xZHLQ29hEozfd625sjr8HDDyHCtrwaqsQUrrfc
-----END CERTIFICATE-----
" | openssl x509 -text | grep Issuer
Issuer: CN=rancher.rancher.lab

I went to settings/advanced/certs in the web UI and checked the CA cert, which I copied here:
-----BEGIN CERTIFICATE-----
MIICyjCCAbKgAwIBAgIBAjANBgkqhkiG9w0BAQsFADAWMRQwEgYDVQQDEwtyYW5j
aGVyLmxhYjAeFw0xODEwMDQxNTIyMDBaFw0xOTEwMDQxNTIyMDBaMBYxFDASBgNV
BAMTC3JhbmNoZXIubGFiMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
8CAK9Ijb4D/dF0IhrKtx2kHlrbP5bgAgw2SCzcnTn+72HeXYgg5Wby8cXrSZNw2p
thN1/VMObWMTn5U6pIO0nrsxsp4PUHpoyT3nlpK5lZzooxjRpIQwpgzNOjzhbf9Z
txtsuV4OGdGvSO29GFTOjwNZW8/FeysM87eKkxK7I5GuP3n63IuS62MujxCThN89
+64AGPiIdfCIij0E00oTVJoreoT/rKSrkKZx/lcQBeDyA0wk1g1LVhOgoqIafuUj
Me0CW0x+GTF97Jlg4D69hQhX94Tn7HC4NPabRMmrvOKyp0M+x8DCPZng5ON0VQxC
PsuYlMTYyJSUpB9Z0uSVxQIDAQABoyMwITAMBgNVHRMEBTADAQH/MBEGCWCGSAGG
+EIBAQQEAwIABTANBgkqhkiG9w0BAQsFAAOCAQEAGSWc1+q8GBYpQNkyeNeo8+Sg
AJaFUDhn1sn6TlVxETKm0vlKH76lpVr73UjSGN1S8hDPZrRz2BqyWFODRyyQXy9C
Ijokqf4y9NgB/mkog33ZMuUU/sPI0z1Fehh5JfuGZzEcMbEJA7E7UvybP+cfSRDa
asmUd/bKOUsxIspGpMvV0SSclD01VLPCCPO4ut17kAqCZmN8GfbP8zYafQ33mFym
XBQM1G9fipd9Jgh4czZNdd54ZyLUR+RyvrXLtIT1qLPV7cdHjHlZ715eXhnaQ+vG
xykllAQlcVMxpidKViX2XCvbxnWniuJPkz9TA3ED/Tyx3uLqQjwTSuuq589ojw==
-----END CERTIFICATE-----

The issuer of this cert is:
root@node1:~# echo "-----BEGIN CERTIFICATE-----
> MIICyjCCAbKgAwIBAgIBAjANBgkqhkiG9w0BAQsFADAWMRQwEgYDVQQDEwtyYW5j
> aGVyLmxhYjAeFw0xODEwMDQxNTIyMDBaFw0xOTEwMDQxNTIyMDBaMBYxFDASBgNV
> BAMTC3JhbmNoZXIubGFiMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
> 8CAK9Ijb4D/dF0IhrKtx2kHlrbP5bgAgw2SCzcnTn+72HeXYgg5Wby8cXrSZNw2p
> thN1/VMObWMTn5U6pIO0nrsxsp4PUHpoyT3nlpK5lZzooxjRpIQwpgzNOjzhbf9Z
> txtsuV4OGdGvSO29GFTOjwNZW8/FeysM87eKkxK7I5GuP3n63IuS62MujxCThN89
> +64AGPiIdfCIij0E00oTVJoreoT/rKSrkKZx/lcQBeDyA0wk1g1LVhOgoqIafuUj
> Me0CW0x+GTF97Jlg4D69hQhX94Tn7HC4NPabRMmrvOKyp0M+x8DCPZng5ON0VQxC
> PsuYlMTYyJSUpB9Z0uSVxQIDAQABoyMwITAMBgNVHRMEBTADAQH/MBEGCWCGSAGG
> +EIBAQQEAwIABTANBgkqhkiG9w0BAQsFAAOCAQEAGSWc1+q8GBYpQNkyeNeo8+Sg
> AJaFUDhn1sn6TlVxETKm0vlKH76lpVr73UjSGN1S8hDPZrRz2BqyWFODRyyQXy9C
> Ijokqf4y9NgB/mkog33ZMuUU/sPI0z1Fehh5JfuGZzEcMbEJA7E7UvybP+cfSRDa
> asmUd/bKOUsxIspGpMvV0SSclD01VLPCCPO4ut17kAqCZmN8GfbP8zYafQ33mFym
> XBQM1G9fipd9Jgh4czZNdd54ZyLUR+RyvrXLtIT1qLPV7cdHjHlZ715eXhnaQ+vG
> xykllAQlcVMxpidKViX2XCvbxnWniuJPkz9TA3ED/Tyx3uLqQjwTSuuq589ojw==
> -----END CERTIFICATE-----" | openssl x509 -text | grep Issuer
Issuer: CN=rancher.lab

This looks good to me. I don’t have any intermediate CAs.

Also, the cert is signed by the CA cert:
root@node1:~# openssl verify -verbose -CAfile /vagrant/ca/rancher.lab.ca.crt /vagrant/ca/rancher.rancher.lab.cert.crt
/vagrant/ca/rancher.rancher.lab.cert.crt: CN = rancher.rancher.lab
error 18 at 0 depth lookup:self signed certificate
OK
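
A quicker cross-check is to print subject and issuer of the files on disk side by side:

openssl x509 -in /vagrant/ca/rancher.rancher.lab.cert.crt -noout -subject -issuer
openssl x509 -in /vagrant/ca/rancher.lab.ca.crt -noout -subject -issuer

If the server cert were signed by the CA, its issuer would show the CA's subject (CN=rancher.lab).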

Proof that the certs in the verify command are the ones used by Rancher:
root@node1:~# tail -n 2 /vagrant/ca/rancher.lab.ca.crt
xykllAQlcVMxpidKViX2XCvbxnWniuJPkz9TA3ED/Tyx3uLqQjwTSuuq589ojw==
-----END CERTIFICATE-----

root@node1:~# tail -n 2 /vagrant/ca/rancher.rancher.lab.cert.crt 
uAB2+S4d2mSkqAfsakoMbKb8X1xZHLQ29hEozfd625sjr8HDDyHCtrwaqsQUrrfc
-----END CERTIFICATE-----

I still can’t make sense of what might be the issue here.

Output from rancher-check.sh:
root@node1:~# ./rancher-check.sh rancher.rancher.lab
OK: DNS for rancher.rancher.lab is 192.168.33.14
OK: Response from rancher.rancher.lab/ping is pong
INFO: CA checksum from rancher.rancher.lab/v3/settings/cacerts is e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
ERR: Certificate chain is not complete
INFO: Found CN rancher.rancher.lab
ERR: No Subject Alternative Name(s) (SANs) found
ERR: Certificate will not be valid in applications that dropped support for commonName (CN) matching (Chrome/Firefox amongst others)
ERR: rancher.rancher.lab was not found in SANs
Trying to get intermediates to complete chain and writing to /certs/fullchain.pem
Note: this usually only works when using certificates signed by a recognized Certificate Authority
open /certs/fullchain.pem: no such file or directory
Showing openssl s_client output
CONNECTED(00000003)
depth=0 CN = rancher.rancher.lab
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = rancher.rancher.lab
verify return:1

Certificate chain
 0 s:/CN=rancher.rancher.lab
   i:/CN=rancher.rancher.lab

Server certificate
-----BEGIN CERTIFICATE-----
MIICzDCCAbSgAwIBAgIBAjANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNyYW5j
aGVyLnJhbmNoZXIubGFiMB4XDTE4MTAwNDE1MjQwMFoXDTE5MTAwNDE1MjIwMFow
HjEcMBoGA1UEAxMTcmFuY2hlci5yYW5jaGVyLmxhYjCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBANx9lMHuha68GSA03B6DSRduYvKy7ChDOwQrWjJgaCRx
rvNg/YDLSNK9rSfGwRQdihh/iHbNxDUU6dy30cykqyQ+7BJllqT8bwP7m/bvnRtT
nVmJiuH+OkGsZ4bmieohGSK1AcP8AbYm0eO3oGeMore2YvtF2Z+qvJ0nuLr/aU0S
HkWfEumqE/lVc02xqsjwsH+ODbe7JeZmsPVG9Zc8CscRCU9LQIYNtJA+Fn1+YN8X
lGR4guz9qf50DYXzD0dN0HOnNQV82QGQJGL8fvHM4KUer9agCJ7pGIJums/0vTeE
nfgBJBG3pVoZLBUu7V+NDuTZUv+Z6cAMxZxlcvUbmUECAwEAAaMVMBMwEQYJYIZI
AYb4QgEBBAQDAgZAMA0GCSqGSIb3DQEBCwUAA4IBAQBKuvA66BcCgGI3ph2PpMA9
hBVlbx4u4R5O/Y9V2a9x5WFx8Gc7+M7tVqs/LOJAHQ97oM0gXHO0OVkS56lUQU8i
QvA0ZP/LwLQbvy3JEwZziCDKB4GVbS5GL0nP/7hsmwsDsoYl/qK8J/+nLkDWeoPS
1Sd+65VvP9iB0ZL2Uj/VlTmprCvQqeBYFetrSdV+eJYci6o5n4PSjTgiOW1yFeyw
rUpMZ36eSMxMFLREpbpcRwqoRqaHlaChuBswyBJkUANuASezXlcGEyXVDKhRSvkk
uAB2+S4d2mSkqAfsakoMbKb8X1xZHLQ29hEozfd625sjr8HDDyHCtrwaqsQUrrfc
-----END CERTIFICATE-----
subject=/CN=rancher.rancher.lab
issuer=/CN=rancher.rancher.lab

No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits

SSL handshake has read 1410 bytes and written 459 bytes

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 3D1DEA0A544E568BD689C0672BF8633D785BC9440002D55A411235CF0D24CE8B
    Session-ID-ctx:
    Master-Key: 3971479905FED34DA9A109019E117A499346087BE29BB064181DE63B723F5F077A3B10EC311F56C49F576C02106DB4DC
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 600 (seconds)
    TLS session ticket:
    0000 - 5c de 9b 52 cf 0e 7b 2c-13 3e 69 14 e3 c0 0a 25   \..R..{,.>i....%
    0010 - 0e 19 1c cc 51 43 c3 c0-07 cf 9b 04 0b 79 24 30   ....QC.......y$0
    0020 - 44 82 03 17 81 24 0c ed-71 1a d4 52 0a 6d f1 cb   D....$..q..R.m..
    0030 - 0f b1 e2 77 89 25 56 c7-a9 07 fa 61 29 a7 ce 6b   ...w.%V....a)..k
    0040 - a0 49 4c 28 c8 0c 7b 74-07 8e 77 b9 1f 35 77 6a   .IL(..{t..w..5wj
    0050 - 12 d9 cd 14 bd bf 6d b3-d8 dd 54 6c 02 d9 01 d8   ......m...Tl....
    0060 - 0a 4c cc 1f a0 4d 91 dc-2c 8d 4f ae 1b 32 ce 68   .L...M..,.O..2.h
    0070 - 49 f0 c9 d4 6f d5 a5 5f-88 fb a5 0e b2 e6 48 ad   I...o.._......H.
    0080 - 00 62 4d 85 2a 5e 97 5b-dc 7b 51 c1 4c 6d af 5f   .bM.*^.[.{Q.Lm._
    0090 - 96 9f 81 1a 50 83 e2 fe-09 a5 76 fc cb 5b 2c ee   ....P.....v..[,.
    00a0 - 8b 5e 87 a3 4f 25 24 a2-f2 54 5e 2a c4 98 1a 82   .^..O%$..T^*....
    00b0 - ab 81 01 62 72 74 f8 33-24 aa 25 49 85 74 88 23   ...brt.3$.%I.t.#

    Start Time: 1538765881
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
DONE
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2 (0x2)
        Issuer: CN=rancher.rancher.lab
        Validity
            Not Before: Oct  4 15:24:00 2018 GMT
            Not After : Oct  4 15:22:00 2019 GMT
        Subject: CN=rancher.rancher.lab

Looks like I will have to double-check the certs. I am surprised; I generated the CA and the server-url cert using xca.
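
In particular, rancher-check flagged the missing subjectAltName. For reference, one way to mint a CA-signed server cert that includes the SAN using plain openssl (a sketch with my hostnames; not necessarily what xca or the gist below produce):

openssl genrsa -out cakey.pem 2048
openssl req -x509 -new -nodes -key cakey.pem -sha256 -days 3650 -subj "/CN=rancher.lab" -out cacerts.pem
openssl genrsa -out key.pem 2048
openssl req -new -key key.pem -subj "/CN=rancher.rancher.lab" -out server.csr
openssl x509 -req -in server.csr -CA cacerts.pem -CAkey cakey.pem -CAcreateserial -days 365 -extfile <(printf "subjectAltName=DNS:rancher.rancher.lab") -out cert.pem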

Update: I generated new certs according to the instructions here: https://gist.github.com/superseb/175476a5a1ab82df74c7037162c64946#create-self-signed-certificates.
However, the cluster behaves exactly the same.

Any help is appreciated.

The methods above are still the way to check whether the certificates are correct; what is the output?

Hi there,

here is the info. It looks the same with the new certs.

root@node1:~# kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS             RESTARTS   AGE
cattle-system   cattle-78b54f84c5-cq2qd                   1/1     Running            1          3d
cattle-system   cattle-cluster-agent-8774bfcf-trm4t       0/1     CrashLoopBackOff   13         3d
cattle-system   cattle-node-agent-5926j                   0/1     CrashLoopBackOff   11         3d
cattle-system   cattle-node-agent-9xmwh                   1/1     Running            11         3d
cattle-system   cattle-node-agent-bzn85                   0/1     CrashLoopBackOff   10         3d
ingress-nginx   default-http-backend-797c5bc547-fprcx     1/1     Running            1          3d
ingress-nginx   nginx-ingress-controller-6gg54            1/1     Running            1          3d
ingress-nginx   nginx-ingress-controller-jpxrp            1/1     Running            1          3d
ingress-nginx   nginx-ingress-controller-kzsjr            1/1     Running            1          3d
kube-system     canal-fw8vc                               3/3     Running            3          3d
kube-system     canal-w7p6k                               3/3     Running            3          3d
kube-system     canal-xflj4                               3/3     Running            3          3d
kube-system     kube-dns-7588d5b5f5-mgsdt                 3/3     Running            3          3d
kube-system     kube-dns-autoscaler-5db9bbb766-svg96      1/1     Running            1          3d
kube-system     metrics-server-97bc649d5-f8zs5            1/1     Running            1          3d
kube-system     rke-ingress-controller-deploy-job-62psx   0/1     Completed          0          3d
kube-system     rke-kubedns-addon-deploy-job-jd9dd        0/1     Completed          0          3d
kube-system     rke-metrics-addon-deploy-job-dgztm        0/1     Completed          0          3d
kube-system     rke-network-plugin-deploy-job-tpj2h       0/1     Completed          0          3d
kube-system     rke-user-addon-deploy-job-sjxrx           0/1     Completed          0          3d


root@node1:~# kubectl logs -n cattle-system cattle-78b54f84c5-cq2qd
2018/10/11 13:37:41 [INFO] Rancher version v2.0.8 is starting
2018/10/11 13:37:41 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0}
2018/10/11 13:37:41 [INFO] Listening on /tmp/log.sock
2018/10/11 13:37:41 [INFO] Activating driver rke
2018/10/11 13:37:41 [INFO] Activating driver rke done
2018/10/11 13:37:41 [INFO] Activating driver gke
2018/10/11 13:37:41 [INFO] Activating driver gke done
2018/10/11 13:37:41 [INFO] Activating driver aks
2018/10/11 13:37:41 [INFO] Activating driver aks done
2018/10/11 13:37:41 [INFO] Activating driver eks
2018/10/11 13:37:41 [INFO] Activating driver eks done
2018/10/11 13:37:41 [INFO] Activating driver import
2018/10/11 13:37:41 [INFO] Activating driver import done
I1011 13:37:41.598351       6 http.go:108] HTTP2 has been explicitly disabled
2018/10/11 13:37:41 [INFO] Starting API controllers
2018/10/11 13:37:42 [INFO] Listening on :443
2018/10/11 13:37:42 [INFO] Listening on :80
I1011 13:37:42.532897       6 leaderelection.go:175] attempting to acquire leader lease  kube-system/cattle-controllers...
I1011 13:37:42.568680       6 leaderelection.go:184] successfully acquired lease kube-system/cattle-controllers
2018/10/11 13:37:42 [INFO] Starting catalog controller
2018/10/11 13:37:42 [INFO] Starting management controllers
2018/10/11 13:37:43 [INFO] Reconciling GlobalRoles
2018/10/11 13:37:43 [INFO] Starting cluster agent forlocal
2018/10/11 13:37:43 [INFO] Reconciling RoleTemplates
2018/10/11 13:37:43 [INFO] Registering project network policy
2018/10/11 13:37:43 [INFO] Registering namespaceHandler for adding labels 
2018/10/11 13:37:43 [INFO] registering podsecuritypolicy cluster handler for cluster local
2018/10/11 13:37:43 [INFO] registering podsecuritypolicy project handler for cluster local
2018/10/11 13:37:43 [INFO] registering podsecuritypolicy namespace handler for cluster local
2018/10/11 13:37:43 [INFO] registering podsecuritypolicy serviceaccount handler for cluster local
2018/10/11 13:37:43 [INFO] registering podsecuritypolicy template handler for cluster local
2018/10/11 13:37:43 [INFO] Starting cluster controllers for local
2018/10/11 13:37:44 [INFO] Updating workload [ingress-nginx/nginx-ingress-controller] with public endpoints [[{"nodeName":"local:machine-nmdz9","addresses":["192.168.33.12"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-jpxrp","allNodes":false},{"nodeName":"local:machine-nmdz9","addresses":["192.168.33.12"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-jpxrp","allNodes":false},{"nodeName":"local:machine-xmtm8","addresses":["192.168.33.10"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-kzsjr","allNodes":false},{"nodeName":"local:machine-xmtm8","addresses":["192.168.33.10"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-kzsjr","allNodes":false},{"nodeName":"local:machine-scrtm","addresses":["192.168.33.11"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6gg54","allNodes":false},{"nodeName":"local:machine-scrtm","addresses":["192.168.33.11"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6gg54","allNodes":false}]]
2018/10/11 13:37:44 [INFO] Updating pod [ingress-nginx/nginx-ingress-controller-6gg54] with public endpoints [[{"nodeName":"local:machine-scrtm","addresses":["192.168.33.11"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6gg54","allNodes":false},{"nodeName":"local:machine-scrtm","addresses":["192.168.33.11"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6gg54","allNodes":false}]]
2018/10/11 13:37:44 [INFO] Updating pod [ingress-nginx/nginx-ingress-controller-jpxrp] with public endpoints [[{"nodeName":"local:machine-nmdz9","addresses":["192.168.33.12"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-jpxrp","allNodes":false},{"nodeName":"local:machine-nmdz9","addresses":["192.168.33.12"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-jpxrp","allNodes":false}]]
2018/10/11 13:37:44 [INFO] Updating pod [ingress-nginx/nginx-ingress-controller-kzsjr] with public endpoints [[{"nodeName":"local:machine-xmtm8","addresses":["192.168.33.10"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-kzsjr","allNodes":false},{"nodeName":"local:machine-xmtm8","addresses":["192.168.33.10"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-kzsjr","allNodes":false}]]
2018/10/11 13:37:44 [INFO] Updating workload [ingress-nginx/nginx-ingress-controller] with public endpoints [[{"nodeName":"local:machine-scrtm","addresses":["192.168.33.11"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6gg54","allNodes":false},{"nodeName":"local:machine-scrtm","addresses":["192.168.33.11"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6gg54","allNodes":false},{"nodeName":"local:machine-nmdz9","addresses":["192.168.33.12"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-jpxrp","allNodes":false},{"nodeName":"local:machine-nmdz9","addresses":["192.168.33.12"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-jpxrp","allNodes":false},{"nodeName":"local:machine-xmtm8","addresses":["192.168.33.10"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-kzsjr","allNodes":false},{"nodeName":"local:machine-xmtm8","addresses":["192.168.33.10"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-kzsjr","allNodes":false}]]
2018/10/11 13:37:44 [INFO] Rancher startup complete
2018/10/11 13:37:44 [INFO] Purged 1 expired tokens
time="2018-10-11 13:37:47" level=info msg="Telemetry Client v0.5.1"
time="2018-10-11 13:37:47" level=info msg="Listening on 0.0.0.0:8114"
2018/10/11 13:38:11 [INFO] Updating catalog library
2018/10/11 13:38:20 [INFO] Handling backend connection request [machine-scrtm]
2018/10/11 13:38:23 [INFO] Catalog sync done. 0 templates created, 26 templates updated, 0 templates deleted
2018/10/11 13:42:48 [INFO] 2018/10/11 13:42:48 http: TLS handshake error from 10.0.2.15:3962: tls: failed to sign ECDHE parameters: rsa: internal error
2018/10/11 13:43:17 [INFO] 2018/10/11 13:43:17 http: TLS handshake error from 10.0.2.15:4052: tls: failed to sign ECDHE parameters: rsa: internal error
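
The recurring "failed to sign ECDHE parameters: rsa: internal error" means Rancher's Go TLS stack failed to sign with the server's RSA private key. That usually points at a damaged or internally inconsistent key file rather than a network problem. A hedged consistency check, assuming rancher.key/rancher.crt are the files that were passed to rke (file names are hypothetical):

root@node1:~# openssl rsa -in rancher.key -noout -check
root@node1:~# openssl x509 -in rancher.crt -noout -modulus | openssl md5
root@node1:~# openssl rsa -in rancher.key -noout -modulus | openssl md5   # both digests must match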


root@node1:~# kubectl logs -n cattle-system cattle-cluster-agent-8774bfcf-trm4t
INFO: Environment: CATTLE_ADDRESS=10.42.1.2 CATTLE_CA_CHECKSUM=a2dab7a20ebe3fdcaf296e213d03aa133ddba317faa4deaedfdfd2daf3397456 CATTLE_CLUSTER=true CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-8774bfcf-trm4t CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.53.18:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.53.18 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 10.43.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local example.com options ndots:5
ERROR: https://rancher.rancher.lab/ping is not accessible (Could not resolve host: rancher.rancher.lab)
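
Worth noting: the cluster agent resolves names through the in-cluster DNS (10.43.0.10 above), which forwards anything outside cluster.local to the upstream nameservers configured on the host. If the lab's bind server is not among those upstreams, rancher.rancher.lab will not resolve from inside a pod even when the host resolves it fine. A hedged way to compare the two, assuming a busybox image can be pulled:

root@node1:~# kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup rancher.rancher.lab
root@node1:~# nslookup rancher.rancher.lab    # the same lookup on the host, for comparison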


root@node1:~# kubectl logs -n cattle-system cattle-node-agent-5926j
INFO: Environment: CATTLE_ADDRESS=10.0.2.15 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=a2dab7a20ebe3fdcaf296e213d03aa133ddba317faa4deaedfdfd2daf3397456 CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=192.168.33.12 CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.53.18:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.53.18 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 192.168.33.13 nameserver 10.0.2.3 search example.com
ERROR: https://rancher.rancher.lab/ping is not accessible (The requested URL returned error: 504)


root@node1:~# kubectl logs -n cattle-system cattle-node-agent-9xmwh
INFO: Environment: CATTLE_ADDRESS=10.0.2.15 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=a2dab7a20ebe3fdcaf296e213d03aa133ddba317faa4deaedfdfd2daf3397456 CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=192.168.33.11 CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.53.18:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.53.18 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 192.168.33.13 nameserver 10.0.2.3 search example.com
INFO: https://rancher.rancher.lab/ping is accessible
INFO: Value from https://rancher.rancher.lab/v3/settings/cacerts is an x509 certificate
time="2018-10-11T13:38:19Z" level=info msg="Rancher agent version v2.0.8 is starting"
time="2018-10-11T13:38:19Z" level=info msg="Option customConfig=map[address:10.0.2.15 internalAddress: roles:[] label:map[]]"
time="2018-10-11T13:38:19Z" level=info msg="Listening on /tmp/log.sock"
time="2018-10-11T13:38:19Z" level=info msg="Option etcd=false"
time="2018-10-11T13:38:19Z" level=info msg="Option controlPlane=false"
time="2018-10-11T13:38:19Z" level=info msg="Option worker=false"
time="2018-10-11T13:38:19Z" level=info msg="Option requestedHostname=192.168.33.11"
time="2018-10-11T13:38:19Z" level=info msg="Connecting to wss://rancher.rancher.lab/v3/connect with token 46nqvk58p6lbqfmd57chfhsfr2g2rt95wkfcw7vtxq85lk4bld4ljn"
time="2018-10-11T13:38:19Z" level=info msg="Connecting to proxy" url="wss://rancher.rancher.lab/v3/connect"
time="2018-10-11T13:38:49Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:39:24Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:39:59Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:40:34Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:41:09Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:41:44Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:42:19Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:42:54Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:43:29Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:44:04Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:44:39Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"
time="2018-10-11T13:45:14Z" level=info msg="Error while getting agent config: invalid response 504: <html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.13.12</center>\r\n</body>\r\n</html>\r\n"

root@node1:~# kubectl logs -n cattle-system cattle-node-agent-bzn85
INFO: Environment: CATTLE_ADDRESS=10.0.2.15 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=a2dab7a20ebe3fdcaf296e213d03aa133ddba317faa4deaedfdfd2daf3397456 CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=192.168.33.10 CATTLE_SERVER=https://rancher.rancher.lab CATTLE_SERVICE_PORT=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_443_TCP=tcp://10.43.53.18:443 CATTLE_SERVICE_PORT_443_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_443_TCP_PORT=443 CATTLE_SERVICE_PORT_443_TCP_PROTO=tcp CATTLE_SERVICE_PORT_80_TCP=tcp://10.43.53.18:80 CATTLE_SERVICE_PORT_80_TCP_ADDR=10.43.53.18 CATTLE_SERVICE_PORT_80_TCP_PORT=80 CATTLE_SERVICE_PORT_80_TCP_PROTO=tcp CATTLE_SERVICE_SERVICE_HOST=10.43.53.18 CATTLE_SERVICE_SERVICE_PORT=80 CATTLE_SERVICE_SERVICE_PORT_HTTP=80 CATTLE_SERVICE_SERVICE_PORT_HTTPS=443
INFO: Using resolv.conf: nameserver 192.168.33.13 nameserver 10.0.2.3 search example.com
ERROR: https://rancher.rancher.lab/ping is not accessible (The requested URL returned error: 504)


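The 504 bodies in these agent logs are stamped nginx/1.13.12, which matches the in-cluster ingress controller rather than the nginx/1.14.0 balancer, so the requests do reach a node; it is the node's ingress that cannot reach a Rancher pod in time. A hedged way to probe each node directly, reusing the node IPs from the logs above:

root@node1:~# curl -k --resolve rancher.rancher.lab:443:192.168.33.10 https://rancher.rancher.lab/ping
root@node1:~# curl -k --resolve rancher.rancher.lab:443:192.168.33.11 https://rancher.rancher.lab/ping
root@node1:~# curl -k --resolve rancher.rancher.lab:443:192.168.33.12 https://rancher.rancher.lab/ping

Each call should print pong when that node's ingress can reach a healthy Rancher pod.
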
root@node1:~# openssl s_client -connect rancher.rancher.lab:443 -servername rancher.rancher.lab
CONNECTED(00000003)    
depth=1 C = CH, ST = Zug, O = Rancher Lab, CN = Rancher Lab Intermediate CA  
verify error:num=20:unable to get local issuer certificate                   
---                                                                          
Certificate chain                                                            
 0 s:/C=XXXX/ST=XXXX/L=xxxx/O=Rancher Lab/CN=rancher.rancher.lab                 
   i:/C=XXXX/ST=XXXX/O=Rancher Lab/CN=Rancher Lab Intermediate CA               
 1 s:/C=XXXX/ST=XXXX/O=Rancher Lab/CN=Rancher Lab Intermediate CA               
   i:/C=XXXX/ST=XXXX/L=xxxx/O=Rancher Lab/CN=Rancher Lab Root CA                 
---                                                                          
Server certificate                                                           
-----BEGIN CERTIFICATE-----                                                  
MIIFVjCCAz6gAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwVzELMAkGA1UEBhMCQ0gx             
DDAKBgNVBAgMA1p1ZzEUMBIGA1UECgwLUmFuY2hlciBMYWIxJDAiBgNVBAMMG1Jh
bmNoZXIgTGFiIEludGVybWVkaWF0ZSBDQTAeFw0xODEwMDgwNzM0MjZaFw0xOTEw
MTgwNzM0MjZaMF0xCzAJBgNVBAYTAkNIMQwwCgYDVQQIDANadWcxDDAKBgNVBAcM
A1p1ZzEUMBIGA1UECgwLUmFuY2hlciBMYWIxHDAaBgNVBAMME3JhbmNoZXIucmFu   
Y2hlci5sYWIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC/d2PwWsbi
7ZEZ/SBE1cO1jqOPKbHhZ/5tua1bdlgHUTVJtvb3w6/JLDy/7djIkcgn7sPPu/Yo
AHJEr4rfOUqJJIuBwT0UbjRREQxxm4P42ujri2HCmTlTmR2FA0/1P2KTjMkGXbgl
SjdPoFyw24K9zzimcRdHwggfqj07rUVcGw6LVH6Y82wif3yjz3kml6zvVEm6OD/6
MIxdyq17hhpSRvKHpQY+mfXN8Jcn9/+iEiFxgzpkSmkCsRrU3lobIv4xUFsYUTXO
CpAHK4k/mMmF8BiM1rfOzk1WX0+8I765G2tQzsGPoL1VcHESF7fqP2tuZjxBYiof
8sUiwE54Bb8zAgMBAAGjggEkMIIBIDAJBgNVHRMEAjAAMBEGCWCGSAGG+EIBAQQE
AwIGQDAzBglghkgBhvhCAQ0EJhYkT3BlblNTTCBHZW5lcmF0ZWQgU2VydmVyIENl
cnRpZmljYXRlMB0GA1UdDgQWBBQWKyLCFO0NVgIlSE78ZGN3zj296DCBhgYDVR0j
BH8wfYAU433FxMzFBGbFRW/EKKeokwelnZChYaRfMF0xCzAJBgNVBAYTAkNIMQww
CgYDVQQIDANadWcxDDAKBgNVBAcMA1p1ZzEUMBIGA1UECgwLUmFuY2hlciBMYWIx
HDAaBgNVBAMME1JhbmNoZXIgTGFiIFJvb3QgQ0GCAhAAMA4GA1UdDwEB/wQEAwIF
oDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0BAQsFAAOCAgEAjBbA8l86
MXg66xtf768ZI2d2WAkgVF4Hh+3LjsNlKtyl5UnyWc+gBrW5lWstv2nXOCJIh1Xu
7l9vFZaYT0TuYBxx9QO/LAdWGq7dd9GtRBloFc+2b/d9nmll08podgZrRMXbYhv6
aGP/6KTXrIrd/apyC11CLgHAhY/qKR6m9FbNmLaupZeLKk3bqT9mSq2VmYP2gf5q
LAqoMKhi7htpIHGS4ZEBGVE0EFBLatMqBCryeso+sUs0OI4m3OosX/lPx46pmS3J
iyFuw30T7b5qDyU7Moexample.com
-----END CERTIFICATE-----
subject=/C=XXXX/ST=XXXX/L=xxxx/O=Rancher Lab/CN=rancher.rancher.lab
issuer=/C=XXXX/ST=XXXX/O=Rancher Lab/CN=Rancher Lab Intermediate CA
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3497 bytes and written 459 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: CC00B9F39172FFC7739A9322ADD4E22CDDA3D22C421A06D6BD6CC03DFFBBE78D
    Session-ID-ctx:
    Master-Key: D5F868D8D270A08D029BCA1CA37812427A2D212F2EB3C9A5C2BE044CFDA7052AF34FDA3F9DFEEE3BB9F85C3BA7F42E38
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 600 (seconds)
    TLS session ticket:
    0000 - 90 6f 48 03 55 5a c8 14-a9 99 1b d5 ee 74 fd 93   .oH.UZ.......t..
    0010 - 6c 34 78 ed 80 49 4e 8d-a1 6d 71 c4 2c 0f dd b4   l4x..IN..mq.,...
    0020 - 7d 97 5f 16 dc bc 4c 63-29 a9 fa a7 66 b1 e2 ec   }._...Lc)...f...
    0030 - a6 e2 bf 7c b6 7f 15 84-13 7b b0 41 38 aa 2f 3a   ...|.....{.A8./:
    0040 - 03 41 60 53 8c 30 97 e6-82 f5 28 ef 88 66 b9 7c   .A`S.0....(..f.|
    0050 - fb e1 2d d4 05 de 13 26-f2 7c 9c c3 2b 9e 35 a1   ..-....&.|..+.5.
    0060 - 34 20 86 97 48 57 f9 3a-5d 6e 89 94 eb ff 12 7f   4 ..HW.:]n......
    0070 - c9 df 49 9a a6 c8 74 cd-14 83 ca d3 c3 4d b4 0a   ..I...t......M..
    0080 - 55 fb e1 f0 09 40 83 9c-4d 22 95 85 d9 6f 0b ab   U....@..M"...o..
    0090 - cb f3 0a 1c 36 c3 61 9e-c2 e7 23 98 e7 20 f0 cb   ....6.a...#.. ..
    00a0 - 6d 48 6e b7 81 3a b6 3d-17 e5 29 d7 be 24 a0 53   mHn..:.=..)..$.S
    00b0 - cd c6 da 48 dd f5 3c 55-b0 ce f5 c0 f4 f9 ac 85   ...H..<U........

    Start Time: 1539265595
    Timeout   : 300 (sec)
    Verify return code: 20 (unable to get local issuer certificate)
---
^C
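
The "Verify return code: 20" by itself only means this openssl client has no copy of the private CA, which is expected with a self-made CA. The handshake can be verified cleanly by passing the CA bundle, assuming it is saved as cacerts.pem (file name is hypothetical):

root@node1:~# openssl s_client -connect rancher.rancher.lab:443 -servername rancher.rancher.lab -CAfile cacerts.pem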

root@node1:~# ./rancher-check.sh rancher.rancher.lab
OK: DNS for rancher.rancher.lab is 192.168.33.14
OK: Response from rancher.rancher.lab/ping is pong
INFO: CA checksum from rancher.rancher.lab/v3/settings/cacerts is e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
ERR: Certificate chain is not complete     
INFO: Found CN rancher.rancher.lab                                              
ERR: No Subject Alternative Name(s) (SANs) found
ERR: Certificate will not be valid in applications that dropped support for commonName (CN) matching (Chrome/Firefox amongst others)
ERR: rancher.rancher.lab was not found in SANs
Trying to get intermediates to complete chain and writing to /certs/fullchain.pem
Note: this usually only works when using certificates signed by a recognized Certificate Authority
open /certs/fullchain.pem: no such file or directory
Showing openssl s_client output                    
CONNECTED(00000003)    
depth=1 C = CH, ST = Zug, O = Rancher Lab, CN = Rancher Lab Intermediate CA  
verify error:num=20:unable to get local issuer certificate                   
---                                                                          
Certificate chain                                                            
 0 s:/C=xxxxx/ST=xxxxx/L=xxxx/O=Rancher Lab/CN=rancher.rancher.lab                 
   i:/C=xxxxx/ST=xxxxx/O=Rancher Lab/CN=Rancher Lab Intermediate CA               
 1 s:/C=xxxxx/ST=xxxxx/O=Rancher Lab/CN=Rancher Lab Intermediate CA               
   i:/C=xxxxx/ST=xxxxx/L=xxxx/O=Rancher Lab/CN=Rancher Lab Root CA                 
---                                                                          
Server certificate                                                           
-----BEGIN CERTIFICATE-----                
MIIFVjCCAz6gAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwVzELMAkGA1UEBhMCQ0gx             
DDAKBgNVBAgMA1p1ZzEUMBIGA1UECgwLUmFuY2hlciBMYWIxJDAiBgNVBAMMG1Jh
bmNoZXIgTGFiIEludGVybWVkaWF0ZSBDQTAeFw0xODEwMDgwNzM0MjZaFw0xOTEw
MTgwNzM0MjZaMF0xCzAJBgNVBAYTAkNIMQwwCgYDVQQIDANadWcxDDAKBgNVBAcM
A1p1ZzEUMBIGA1UECgwLUmFuY2hlciBMYWIxHDAaBgNVBAMME3JhbmNoZXIucmFu   
Y2hlci5sYWIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC/d2PwWsbi
7ZEZ/SBE1cO1jqOPKbHhZ/5tua1bdlgHUTVJtvb3w6/JLDy/7djIkcgn7sPPu/Yo
AHJEr4rfOUqJJIuBwT0UbjRREQxxm4P42ujri2HCmTlTmR2FA0/1P2KTjMkGXbgl
SjdPoFyw24K9zzimcRdHwggfqj07rUVcGw6LVH6Y82wif3yjz3kml6zvVEm6OD/6
MIxdyq17hhpSRvKHpQY+mfXN8Jcn9/+iEiFxgzpkSmkCsRrU3lobIv4xUFsYUTXO
CpAHK4k/mMmF8BiM1rfOzk1WX0+8I765G2tQzsGPoL1VcHESF7fqP2tuZjxBYiof
8sUiwE54Bb8zAgMBAAGjggEkMIIBIDAJBgNVHRMEAjAAMBEGCWCGSAGG+EIBAQQE           
AwIGQDAzBglghkgBhvhCAQ0EJhYkT3BlblNTTCBHZW5lcmF0ZWQgU2VydmVyIENl
cnRpZmljYXRlMB0GA1UdDgQWBBQWKyLCFO0NVgIlSE78ZGN3zj296DCBhgYDVR0j
BH8wfYAU433FxMzFBGbFRW/EKKeokwelnZChYaRfMF0xCzAJBgNVBAYTAkNIMQww
CgYDVQQIDANadWcxDDAKBgNVBAcMA1p1ZzEUMBIGA1UECgwLUmFuY2hlciBMYWIx           
HDAaBgNVBAMME1JhbmNoZXIgTGFiIFJvb3QgQ0GCAhAAMA4GA1UdDwEB/wQEAwIF
oDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0BAQsFAAOCAgEAjBbA8l86
MXg66xtf768ZI2d2WAkgVF4Hh+3LjsNlKtyl5UnyWc+gBrW5lWstv2nXOCJIh1Xu
7l9vFZaYT0TuYBxx9QO/LAdWGq7dd9GtRBloFc+2b/d9nmll08podgZrRMXbYhv6
aGP/6KTXrIrd/apyC11CLgHAhY/qKR6m9FbNmLaupZeLKk3bqT9mSq2VmYP2gf5q
LAqoMKhi7htpIHGS4ZEBGVE0EFBLatMqBCryeso+sUs0OI4m3OosX/lPx46pmS3J
iyFuw30T7b5qDyU7Moexample.com
-----END CERTIFICATE-----
subject=/C=xxxxx/ST=xxxxx/L=xxxx/O=Rancher Lab/CN=rancher.rancher.lab
issuer=/C=xxxxx/ST=xxxxx/O=Rancher Lab/CN=Rancher Lab Intermediate CA
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3497 bytes and written 459 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: E114DE78362C31BFDE1CF65AF8770816CE5DBF8A8CA1151D03BC12E6C3B0ADB5
    Session-ID-ctx:
    Master-Key: 1FEEC6E31471D14B4858C13990DB961E3EAB2674E18D1E3DAEB57B5519FBAB4B4A71EE3E7F17CFE95989B5BAA3212C2F
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 600 (seconds)
    TLS session ticket:
    0000 - 2b e0 5a 5a 75 b2 7b e8-7d f0 05 a0 a6 3c 0d 74   +.ZZu.{.}....<.t
    0010 - 8d a9 36 34 85 18 40 0c-c2 ba 0f 59 6e 96 e6 62   ..64..@....Yn..b
    0020 - a6 e9 51 2a 36 c7 1b 3b-cf 0b 79 5f 8a 0c 3c f1   ..Q*6..;..y_..<.
    0030 - 0e 99 3f 99 b8 44 8d b8-70 f8 95 9d f3 cd 71 71   ..?..D..p.....qq
    0040 - db bc 81 e2 e4 53 b4 ee-29 7d d5 67 97 88 8f 66   .....S..)}.g...f
    0050 - 76 01 9e 8a fc bf ee 3b-4e 36 82 b7 8e f5 cb a3   v......;N6......
    0060 - 1b 5a 13 13 02 aa 9e de-1b a6 06 71 fb 55 e4 30   .Z.........q.U.0
    0070 - 03 52 0b 2a c0 3e ae 23-a6 39 19 a7 ef 30 09 e4   .R.*.>.#.9...0..
    0080 - 2f 3f 98 27 a8 dc e5 8c-ee 7f 0e d0 8c 60 aa e7   /?.'.........`..
    0090 - e1 57 cd 01 f2 eb 97 97-9d 39 26 32 7c f6 e1 e2   .W.......9&2|...
    00a0 - df 4d 13 6c 91 8c bd a3-62 19 82 65 ce 2e 7f a8   .M.l....b..e....
    00b0 - 19 9e 2d 8a d1 f4 42 53-f4 a3 5d f6 ed 3c 11 2f   ..-...BS..]..<./

    Start Time: 1539265840
    Timeout   : 300 (sec)
    Verify return code: 20 (unable to get local issuer certificate)
---
DONE
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 4096 (0x1000)
        Issuer: C=xxxxx, ST=xxxxx, O=Rancher Lab, CN=Rancher Lab Intermediate CA
        Validity
            Not Before: Oct  8 07:34:26 2018 GMT
            Not After : Oct 18 07:34:26 2019 GMT
        Subject: C=xxxxx, ST=xxxxx, L=xxxx, O=Rancher Lab, CN=rancher.rancher.lab

Is it a problem that the x509 Subject Alternative Name is missing ("ERR: No Subject Alternative Name(s) (SANs) found")? The cert was generated using the script at https://gist.github.com/superseb/175476a5a1ab82df74c7037162c64946#create-self-signed-certificates.
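
For anyone hitting the same SAN errors: Chrome and Firefox have dropped CN matching, so the cert should eventually be re-issued with a SAN anyway. A sketch with plain openssl, assuming the intermediate CA key and cert are at hand (all file names here are hypothetical):

cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = v3_req
[dn]
[v3_req]
subjectAltName = DNS:rancher.rancher.lab
EOF
# new key and a CSR that carries the SAN request
openssl req -new -newkey rsa:2048 -nodes -keyout rancher.key \
  -subj "/CN=rancher.rancher.lab" -out rancher.csr -config san.cnf
# sign with the intermediate; -extfile/-extensions are needed so the SAN lands in the cert
openssl x509 -req -in rancher.csr -CA intermediate.crt -CAkey intermediate.key \
  -CAcreateserial -days 365 -out rancher.crt -extensions v3_req -extfile san.cnf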

I had to execute the rancher-check.sh script multiple times before it returned a pong response. Most of the time it failed:

root@node1:~# ./rancher-check.sh rancher.rancher.lab
OK: DNS for rancher.rancher.lab is 192.168.33.14
ERR: Response from rancher.rancher.lab/ping is not pong:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   194  100   194    0     0  19134      0 --:--:-- --:--:-- --:--:-- 21555
HTTP/1.1 301 Moved Permanently
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 11 Oct 2018 13:58:06 GMT
Content-Type: text/html
Content-Length: 194
Connection: keep-alive
Location: https://rancher.rancher.lab/ping

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
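
Note that this 301 is served by nginx/1.14.0 (Ubuntu), i.e. by the load balancer itself: on the failing runs the request never reached the cluster, because the balancer's port-80 listener answered with its own redirect instead of passing the traffic through. Comparing the two listeners directly makes that visible:

root@node1:~# curl -sS -o /dev/null -D - http://rancher.rancher.lab/ping    # shows who answers on port 80
root@node1:~# curl -ksS https://rancher.rancher.lab/ping                    # should print pong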

I was finally able to solve the problem.

This is my rancher-cluster.yml:

nodes:
  - address: node1.rancher.lab
    user: root
    role: [controlplane,worker,etcd]
    ssh_key_path: ssh_key
  - address: node2.rancher.lab
    user: root
    role: [controlplane,worker,etcd]
    ssh_key_path: ssh_key
  - address: node3.rancher.lab
    user: root
    role: [controlplane,worker,etcd]
    ssh_key_path: ssh_key

When I deployed the cluster, I ended up with the problems described above.
It turns out that RKE assumes that the IP or domain name configured as a node's address refers to that node's default network interface. This matters because Kubernetes apparently binds its components to the interface that carries the default route (see the sketch below for an RKE-level way to control this).

That was not the case in my scenario… maybe something that should be added to the RKE documentation?

I don't know Kubernetes well enough to say why binding to the interface with the default route is important.
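
For reference, RKE's cluster.yml also accepts an internal_address per node, which pins cluster traffic to a specific interface. I did not end up needing it, but a sketch based on the config above would look like this (which host-only IP belongs to which node is an assumption here):

nodes:
  - address: node1.rancher.lab        # name rke uses to reach the node over SSH
    internal_address: 192.168.33.10   # interface the cluster traffic should use
    user: root
    role: [controlplane,worker,etcd]
    ssh_key_path: ssh_key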

I was able to reproduce and resolve the problem on both Rancher 2.0.8 and 2.1.0.

Thanks for all the help.


@random I am in the same situation. However, I am a network/software newbie :woozy_face: Could you please let me know what you did to fix this?

@koshur sure. Referring to the example in my previous post: simply make sure that
node1.rancher.lab, node2.rancher.lab, etc. always resolve to an IP address that is configured on the interface carrying the node's default route (default gateway).
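
A quick way to check this on each node (node1 and one of the IPs from the logs used as the example):

root@node1:~# dig +short node1.rancher.lab          # e.g. 192.168.33.10
root@node1:~# ip -o addr show | grep 192.168.33.10  # which interface owns that IP?
root@node1:~# ip route show default                 # must point at the same interface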