Rancher health check fails only in AWS environment

I’m running Rancher v1.4.2 on the east coast against an AWS Docker 1.12.6 host in Singapore.
When I deploy the stack, my containers seem to work, but when I launch rancher/lb-service-haproxy:v0.5.9 as part of the stack, the load balancer won’t start. The error I’m getting is:

Failed to create Rancher client Get http://myrancher01:7001/v2-beta: dial tcp: lookup myrancher01 on 169.254.169.250:53: cannot unmarshal DNS message

but the DNS entry for myrancher01 does exist.

Any ideas?

This sounds like an issue with Go’s built-in vs. cgo DNS resolution, which is likely fixed in a newer version.
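For context: the “cannot unmarshal DNS message” error comes from Go’s built-in resolver, which parses DNS responses itself and is stricter than the system (cgo) resolver. A minimal sketch to reproduce the difference outside Rancher (the hostname is the one from this thread; substitute your own):

```go
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	host := "myrancher01.location.com" // hostname from this thread; substitute your own

	// PreferGo: true forces Go's built-in resolver, which parses DNS
	// responses itself and fails with "cannot unmarshal DNS message"
	// on responses it considers malformed.
	goRes := &net.Resolver{PreferGo: true}
	addrs, err := goRes.LookupHost(context.Background(), host)
	fmt.Println("built-in resolver:", addrs, err)

	// PreferGo: false uses Go's default resolver selection, which on
	// most Linux builds with cgo enabled means the system's
	// getaddrinfo, a more lenient parser.
	defRes := &net.Resolver{PreferGo: false}
	addrs, err = defRes.LookupHost(context.Background(), host)
	fmt.Println("default resolver: ", addrs, err)
}
```

You can also flip the resolver at runtime without recompiling by setting GODEBUG=netdns=cgo (or netdns=go) in the process environment; whether that helps here depends on how the lb-service image was built.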

Thanks for replying. I just tried launching Rancher 1.6.5 and connected a Docker host running RancherOS v1.0.3, and I’m getting a similar error. Is there something I can do to the load balancer to correct the DNS?

7/27/2017 5:13:37 PM time="2017-07-27T21:13:37Z" level=fatal msg="Failed to create Rancher client Get http://myrancher01:7001/v2-beta: dial tcp: lookup myrancher01 on 169.254.169.250:53: cannot unmarshal DNS message"
7/27/2017 5:13:50 PM time="2017-07-27T21:13:50Z" level=error msg="Failed to initialize Kubernetes controller: KUBERNETES_URL is not set"
7/27/2017 5:13:50 PM time="2017-07-27T21:13:50Z" level=fatal msg="Failed to create Rancher client Get http://myrancher01:7001/v2-beta: dial tcp: lookup myrancher01 on 169.254.169.250:53: cannot unmarshal DNS message"
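If you want to confirm that the failure is specific to Rancher’s internal DNS server at 169.254.169.250, you can point a resolver at it directly. A rough diagnostic sketch, assuming you run it from a container on the Rancher-managed network (the address and hostname are the ones from the logs above):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force every lookup through Rancher's internal DNS server,
	// the 169.254.169.250:53 address shown in the log output above.
	r := &net.Resolver{
		// Built-in resolver, so we hit the same strict parsing the LB agent does.
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "169.254.169.250:53")
		},
	}

	addrs, err := r.LookupHost(context.Background(), "myrancher01.location.com")
	if err != nil {
		// An "unmarshal" failure here points at the response from the
		// internal DNS server rather than at the upstream DNS entry itself.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```

Running the same lookup without the custom Dial (so it goes through your normal upstream resolver) helps narrow down which hop is producing the malformed response.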

Is myrancher01 the actual hostname, or is it something.local?

It’s actually a fully qualified domain name; I just made it more generic here. It’s myrancher01.location.com.

For anyone else hitting this: I had to switch my IAM role to a specialized RancherRole, and then everything started working.