I’m trying to stand up a single-node server in a test environment following the HA guide, thinking that if I can get this working I can easily add nodes to the cluster later if needed. During the install I’m attempting to use a cert from an internal Windows Server 2016 CA to mimic what would happen in production. I’m setting the cert values via the TLS secrets as described, but I keep getting the following error:
unexpected error generating SSL certificate with full intermediate chain CA certs: Get ldap:///{Long CA Path}: unsupported protocol scheme "ldap"
I did add the CA cert to the tls-ca secret, and also tried adding the CA cert into the Rancher cert for a complete chain. I did find a K8S ingress-nginx issue outlining the same problem, with a fix that was merged about a year ago. Since the Rancher deployment uses its own internal fork of ingress-nginx, I’m not sure whether that patch has been applied there or not. I’m also not savvy enough to do this on my own yet, as I’m still learning the K8S and Rancher environments. Any help would be appreciated.
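For reference, I created the secrets roughly like this, following the docs (the file names below stand in for my actual cert files):

    # cert/key pair for the Rancher hostname, issued by the internal CA
    kubectl -n cattle-system create secret tls tls-rancher-ingress \
      --cert=tls.crt --key=tls.key

    # the internal CA certificate, stored in the tls-ca secret
    kubectl -n cattle-system create secret generic tls-ca \
      --from-file=cacerts.pem=./cacerts.pem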
The issue is that the ingress controller tries to auto-complete the certificate chain by default (--enable-ssl-chain-completion=true) and can’t handle the ldap:// AIA URI set in your certificate. The referenced issue involves the same certificate setup, but there the reporter was trying to disable chain completion by setting --enable-ssl-chain-completion=false, which wasn’t being picked up; the fix for that issue was to honor the setting properly.
Can you try creating a cluster with the following config added to cluster.yml:
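(Sketching from memory here; this assumes RKE provisioning, where entries under ingress.extra_args are passed through as flags to the nginx ingress controller.)

    ingress:
      provider: nginx
      extra_args:
        # stop nginx from trying to complete the chain via the
        # ldap:// AIA URI embedded in your certificate
        enable-ssl-chain-completion: "false"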
Thanks Sebastiaan, that seems to have done the trick on the certificate issue. Unfortunately I now have another problem, the same one referenced in this issue. I’m showing 3 pods running and ready, but checking the logs I get the following on one of the pods:
[ERROR] ClusterController local [cluster-deploy]failed with : waiting for server-url setting to be set
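(For what it’s worth, I’m pulling those logs with something like the command below; app=rancher is, as far as I can tell, the label the chart puts on the Rancher pods.)

    kubectl -n cattle-system logs -l app=rancher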
Thanks for continuing to help. For some reason I’m not getting an HTTP response. Everything I check on the K8S side seems okay, and the waiting-for-server-url message is the only error I’m finding in the logs. All pods are up and ready, and there are 3 replicas of the Rancher service running. Not sure where to check next.
Basically it comes down to checking the created ingress: its hostname should point either to the load balancer in front of your nodes or to the single node you have in the cluster. If that’s the case, you can check the ingress logging to see what it logs when you try to access it.
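For example, assuming the default cattle-system namespace and the app=ingress-nginx label that RKE puts on the ingress controller pods (adjust the names to your setup):

    # verify the ingress exists and its host matches your Rancher URL
    kubectl -n cattle-system get ingress

    # tail the controller logs while you try to access the URL
    kubectl -n ingress-nginx logs -l app=ingress-nginx --tail=50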
Helps if I actually open the ports on the firewall.
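For anyone else who hits this: in my case that meant opening 80/tcp and 443/tcp on the node. On a firewalld host, which is what I’m assuming here, that looks something like:

    sudo firewall-cmd --permanent --add-port=80/tcp
    sudo firewall-cmd --permanent --add-port=443/tcp
    sudo firewall-cmd --reload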
I don’t remember having to do that with the container version, but who knows.
Cluster is up and going now, thanks again for your help.