Enabling https on Rancher

I was trying to enable HTTPS for Rancher server. It succeeds with a self-signed certificate, following the tutorial at https://docs.rancher.com/rancher/v1.4/en/installing-rancher/installing-server/basic-ssl-config/ , but it fails while adding hosts.

Since I have been using https://rancher-server to access it, I obviously had to edit the hosts file of my local machine to map the IP for rancher-server. While adding a host I did the same thing: I edited the /etc/hosts file of the host VM and ran the command from the documentation. However, when the host-adding command ran, the rancher-agent container it created first failed with the error below:

INFO: Running Agent Registration Process, CATTLE_URL=https://rancher-server/v1
INFO: Attempting to connect to: https://rancher-server/v1
ERROR: https://rancher-server/v1 is not accessible

So I had to get into the rancher-agent container and update the hosts entry for rancher-server there too. It then launches another set of containers, including another rancher-agent plus the containers from the environment templates (ipsec, healthcheck, etc.). But I can't apply the same workaround to those, because they stop instantly, and it isn't a good workaround anyway. Does anyone have suggestions for using a private name for the Rancher server that isn't published on the internet? We don't have any private nameservers either. Please help.


Any idea? At least, is there any way to insert an "IP hostname" entry into /etc/hosts of the containers at creation time, without losing the existing localhost entries?
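For reference, Docker can inject such an entry at container-creation time with the --add-host flag; it appends to the /etc/hosts file Docker generates, so the localhost lines are preserved. A sketch, where the IP address is a made-up placeholder:

```shell
# --add-host appends "IP hostname" to the container's /etc/hosts,
# leaving Docker's generated localhost entries intact.
# 192.168.1.100 is a placeholder for the Rancher server's real IP.
docker run --rm --add-host rancher-server:192.168.1.100 \
  alpine cat /etc/hosts
```

The printed /etc/hosts should contain both the standard 127.0.0.1 localhost line and the appended rancher-server entry.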

Why not use a reverse proxy?

I use Nginx:

upstream rancher {
    server 127.0.0.1:8080;    # Rancher server container (default port 8080; adjust to your setup)
}

server {
    listen 443 ssl;
    server_name prancher.example.com;
    ssl_certificate /etc/nginx/certs/wildcard.crt;
    ssl_certificate_key /etc/nginx/certs/wildcard.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rancher;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        # Allow the "execute shell" window to stay open for up to 15 minutes;
        # without this the default read timeout is 1 minute and the session
        # closes automatically.
        proxy_read_timeout 900s;
        add_header Strict-Transport-Security "max-age=31536000";
    }
}

server {
    listen 80;
    server_name prancher.example.com;
    return 301 https://$server_name$request_uri;
}
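The upstream block assumes the Rancher server container is reachable on the proxy host. A sketch of launching it that way (Rancher 1.x serves its UI/API on port 8080 by default; the image tag is an assumption):

```shell
# Sketch: run the Rancher 1.x server with its UI/API published on
# port 8080 of the proxy host, matching the nginx upstream above.
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
```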


Because the infrastructure stack services need to resolve the hostname of the Rancher server independently of their hosts (due to isolation), I don't think this is possible. We tried to go "DNS-less" as well, but it's just not possible at the moment.
It's possible to use Docker's --add-host option with docker run for the agent, but there is no simple way to do this for each infra service. This isn't related to TLS hosting itself, either. Unfortunately, until we find another method, the Rancher server hostname will need to be resolvable via DNS from any container.
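For the agent itself, that --add-host workaround looks roughly like this: take the registration command shown in the Rancher UI and add the flag to it. The IP, agent version, and token URL below are placeholders, not values from this thread:

```shell
# Placeholder IP and registration token; copy the real command from
# the "Add Host" screen in the Rancher UI and insert --add-host.
sudo docker run -d --privileged \
  --add-host rancher-server:192.168.1.100 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.11 https://rancher-server/v1/scripts/REGISTRATION_TOKEN
```

This only fixes resolution for the registration container, not for the infra-service containers it subsequently launches, which is exactly the limitation described above.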

So the only way is to run your own DNS resolver servers.

If every host machine points to the local DNS resolver, then you have control over what is returned for any DNS query. 🙂

This seems like the answer to the question: run your own DNS resolvers. CoreDNS may be the way to go; it seems the simplest, since you just set up the config file with the relevant custom records, and everything else gets forwarded elsewhere for resolution (or resolved via root queries). Because it is middleware-pluggable, this custom config could be pulled from anywhere, perhaps Redis. Then, as part of building a host, CoreDNS is one of the services that is started, and DNS resolution is handled locally.
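A minimal Corefile sketch for that idea, assuming a made-up IP and a public upstream resolver: the hosts plugin answers the private name, and everything else falls through and is forwarded.

```
# Hypothetical addresses; adjust to your environment.
.:53 {
    hosts {
        192.168.1.100 rancher-server
        fallthrough
    }
    forward . 8.8.8.8    # everything else goes to an upstream resolver
    cache
    log
}
```

Pointing each host's /etc/resolv.conf at the machine running this resolver then makes rancher-server resolvable from every container, including the infra-service ones.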

Personally, I’d just use Route 53.

Yeah, that makes sense for a real FQDN. However, the OP is using made-up domains, so it doesn’t seem like that would work?

Yep, that’s right…

I went with the DNS server approach.

Thanks, everyone, for the suggestions.