I am trying to start up an HA cluster for Rancher, but my configuration is running into significant errors.
At the moment, the HA installation documentation is a little light on the details of exactly what is required from the load balancer. I am deploying on DigitalOcean droplets and have a set of load balancers attached, pointing at my three Rancher servers. Should the Host Registration URL be the FQDN of this load balancer? Do I need to load SSL certificates on the load balancer, or are the self-signed certificates used for this purpose? I have seen several mentions of needing to enable the proxy protocol for AWS ELB; is there any guidance for HAProxy? A rough sketch of what I am currently trying on the HAProxy side is included below.
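For reference, here is a minimal sketch of the kind of HAProxy configuration I am working from. The backend IPs are placeholders, and I am assuming plain TCP passthrough with the proxy protocol enabled via `send-proxy` on the server lines, mirroring what the docs describe for ELB; whether that is actually what Rancher expects is part of my question.

```
# Minimal TCP passthrough sketch (placeholder IPs), assuming Rancher
# handles SSL itself and the LB only forwards ports 80/443.
global
    maxconn 4096

defaults
    mode    tcp
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend rancher_http
    bind *:80
    default_backend rancher_servers_http

frontend rancher_https
    bind *:443
    default_backend rancher_servers_https

backend rancher_servers_http
    balance roundrobin
    # send-proxy enables the proxy protocol toward each Rancher server
    server rancher1 10.0.0.1:80 send-proxy check
    server rancher2 10.0.0.2:80 send-proxy check
    server rancher3 10.0.0.3:80 send-proxy check

backend rancher_servers_https
    balance roundrobin
    server rancher1 10.0.0.1:443 send-proxy check
    server rancher2 10.0.0.2:443 send-proxy check
    server rancher3 10.0.0.3:443 send-proxy check
```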
The most common errors I am encountering are:
```
Container agent is not running in state &types.ContainerState{Status:"exited", Running:false, Paused:false, Restarting:false, OOMKilled:false, Dead:false, Pid:0, ExitCode:0, Error:"", StartedAt:"2016-05-03T03:30:47.529858519Z", FinishedAt:"2016-05-03T03:31:02.052640391Z"} component=docker
```

```
Failed to read project: Unsupported config option for rancher-compose-executor service: 'health_check'
Unsupported config option for go-machine-service service: 'health_check'
Unsupported config option for websocket-proxy service: 'health_check'
Unsupported config option for websocket-proxy-ssl service: 'health_check'
Unsupported config option for cattle service: 'health_check'
```

```
Could not parse config for project management : Unsupported config option for cattle service: 'health_check'
Unsupported config option for go-machine-service service: 'health_check'
Unsupported config option for websocket-proxy-ssl service: 'health_check'
Unsupported config option for rancher-compose-executor service: 'health_check'
Unsupported config option for websocket-proxy service: 'health_check'
```
If I can get past these questions/errors, I think it should work. Port 18080 comes up even with these errors, but the project/cluster never comes online on port 80.