We are running Rancher 2 with Kubernetes deployed on vSphere VMs.
We have a service (two nginx hosts) holding data that our Tomcat pods need to connect to and download, and we want that traffic to go over HTTPS. So we have a Service, config.svc.cluster.local, that these pods connect to. The problem is: how do we get a valid certificate onto the nginx hosts?
I see that Kubernetes has a CA cert available, but when I generate a CSR and approve it, no certificate is ever issued. Does that mean the controller manager can't reach the CA key? I do not want to do this with an Ingress cert; this traffic should not leave the cluster.
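For reference, this is roughly the flow I am using. Approving a CSR only marks it approved; the controller manager's built-in signer still has to be configured before `.status.certificate` gets populated. A sketch (the namespace `default`, the CSR name `config-svc`, and the `legacy-unknown` signer are assumptions; on 1.22+ the `legacy-unknown` signer is gone and you need a different signer):

```shell
# Generate a key and CSR for the in-cluster service name
# (-addext needs OpenSSL 1.1.1+; adjust the namespace to yours)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout config.key -out config.csr \
  -subj "/CN=config.default.svc.cluster.local" \
  -addext "subjectAltName=DNS:config.default.svc.cluster.local"

# Wrap the CSR in a CertificateSigningRequest object
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: config-svc
spec:
  request: $(base64 -w0 < config.csr)
  signerName: kubernetes.io/legacy-unknown  # assumption; removed in 1.22+
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF

kubectl certificate approve config-svc

# This stays empty unless the controller manager's signer is configured
kubectl get csr config-svc -o jsonpath='{.status.certificate}' \
  | base64 -d > config.crt
```

If `config.crt` comes out empty, the approve step worked but the signer did not, which matches what I am seeing.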
In the kubernetes.io docs I found this:
"To enable it, pass the --cluster-signing-cert-file and --cluster-signing-key-file parameters to the controller manager with paths to your Certificate Authority's keypair."
How is that done?
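Since this is a Rancher 2 cluster, I assume these flags would go under `extra_args` for the controller manager in the RKE `cluster.yml`. A sketch, assuming the RKE-generated cluster CA lives at the usual `/etc/kubernetes/ssl` paths (verify the filenames on your control-plane nodes):

```yaml
# cluster.yml (RKE) fragment - paths are assumptions, point them at your cluster CA
services:
  kube-controller:
    extra_args:
      cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
      cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem
```

After changing `cluster.yml` you would re-run `rke up` (or update the cluster through the Rancher UI) so the kube-controller-manager container is recreated with the new flags.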
Then I thought of just using config.companyname.com, since we already have a certificate for that domain, but how do I add CNAMEs in kube-dns?
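As far as I know kube-dns itself has no CNAME facility, but if the cluster ships CoreDNS you can get the same effect with the `rewrite` plugin, mapping the external name onto the Service name. A sketch of the Corefile (in the `coredns` ConfigMap in `kube-system`; the namespace `default` is an assumption):

```yaml
# Corefile fragment - rewrites queries for the external name to the Service name
.:53 {
    rewrite name config.companyname.com config.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

The cert for config.companyname.com would then match what the pods dial, at the cost of that name resolving differently inside the cluster than outside.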
The correct long-term solution is probably ConfigMaps, but we are mid-migration, so we need to get this working as-is first.