Route53 Service only syncs certain services?

Currently, the Route53 service is only syncing some but not all of my services (running Rancher 0.50.2 upgraded from 0.47).

The logs show nothing that looks off to me; it seems more like behaviour that is maybe undocumented. Some services are being assigned FQDNs while others are not.

Any thoughts as to the cause of this? At first I thought the service was only publishing names for services that exposed ports, but it didn’t look like that was the case.

My goal is to use a VPN to be able to quickly connect to the Rancher-managed network and do things like directly interact with a non-public database. I have the VPN working as per http://rancher.com/building-a-continuous-integration-environment-using-docker-jenkins-and-openvpn/ (without the Jenkins part), but now I need name resolution. Since the Rancher DNS only resolves services that are linked to the container, that isn’t a solution either. I was hoping the Route53 code would expose every service (even better, every service and every container as a separate name).

IIRC Rancher only generates DNS entries if a container exposes a port on the host.

Maybe we should open a bug on the docs then? That wasn’t made clear, unless I misread something. Also, a service that didn’t have ports exposed got published as well (which might also be a bug?).

I suspect my use case isn’t going to be well supported for now, and I may end up writing my own DNS service for this purpose, but it would be really nice to be able to casually interact with containers ‘behind the firewall’ of the host systems.

Scratch that … somehow I didn’t read the first line of the docs section, from http://docs.rancher.com/rancher/rancher-services/dns-service/#using-route53-service:

The route53 service will generate DNS records for only services that have ports published to the host.

Again, it would be nice to be able to resolve any container’s name, unless the future plan is to add some form of firewall between services that aren’t linked. It would be nice if the built-in Rancher DNS service would just resolve any arbitrary container/service name (e.g. <container>.<service>.<stack>.<env> or something like that).

I tried to explain something similar: Rancher external DNS docker ip instead of rancher host

If you have direct access to the Rancher-managed network, why not use container IPs in the DNS?

@jschilperoord I did a quick hack to make external-dns set internal IPs for all containers. I branched external-dns at https://github.com/bloomapi/external-dns.

You can build the container with a normal docker build . after checking it out.

Deploy it with the same env vars that the catalog entries create; e.g. here’s a docker-compose file:

route53:
  image: <your private repo that you docker push to ... or using 'build' instead>
  expose:
    - 1000
  environment:
    AWS_ACCESS_KEY: <key>
    AWS_SECRET_KEY: <secret>
    AWS_REGION: us-east-1
    ROOT_DOMAIN: <your domain>
    TTL: 300
  labels:
    io.rancher.container.create_agent: "true"
    io.rancher.container.agent.role: "external-dns"

@alena I thought this might be of interest to you as well … the changes are straightforward but definitely not worthy of being pulled … just thought I’d share my use case. Most of the meaningful changes are at https://github.com/bloomapi/external-dns/commit/791c563a1bd5427eaf02a4dc2a9b07698cb92d21#diff-b3a9f1c1b5d5600f4924efe942b13fb0R67 - note, one issue I hit with this experiment is that some of the longer service + container names are longer than Route53 supports.
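
For context, here’s a rough sketch of the idea (not the actual code from that commit; the helper and names are made up) of forming per-container FQDNs and checking them against standard DNS length limits (63 characters per label, roughly 253 for the full name), which is where my longer service + container names fell over:

package main

import (
	"fmt"
	"strings"
)

// buildFqdn mirrors the naming scheme I was experimenting with:
// <container>.<service>.<stack>.<environment>.<root domain>.
// It rejects names that exceed standard DNS limits.
func buildFqdn(container, service, stack, env, rootDomain string) (string, error) {
	fqdn := strings.Join([]string{container, service, stack, env, rootDomain}, ".")
	for _, label := range strings.Split(fqdn, ".") {
		if len(label) > 63 {
			return "", fmt.Errorf("label %q exceeds 63 characters", label)
		}
	}
	if len(fqdn) > 253 {
		return "", fmt.Errorf("fqdn %q exceeds 253 characters", fqdn)
	}
	return fqdn, nil
}

func main() {
	fqdn, err := buildFqdn("postgres_1", "postgres", "mystack", "default", "example.com")
	if err != nil {
		fmt.Println("skipping record:", err)
		return
	}
	fmt.Println("would publish:", fqdn)
}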

@jschilperoord @untoldone it does make sense to provide different options for picking the FQDN target IPs, but I would suggest a slightly different way of implementing it. Instead of making custom changes to external-dns and providing a template that uses an out-of-mainline image, we can do this:

  1. Each way of building a) the IP set for an FQDN and b) the FQDN itself should be represented as a plugin in the external-dns repo.
  2. Pass the desired plugin as a CMD option in the external-dns template. The default would be the current behaviour: a) choose the host IP as the target IP address and b) form the FQDN as servicename.stackname.environmentName (see the sketch below).
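
Roughly, the plugin contract could look something like this; this is just a sketch, and the type and plugin names are illustrative rather than anything in the current external-dns code:

// Sketch of a possible target/FQDN plugin for external-dns.
package plugins

type ServiceRecord struct {
	ServiceName     string
	StackName       string
	EnvironmentName string
	HostIPs         []string // IPs of the hosts the containers run on
	ContainerIPs    []string // internal (managed network) container IPs
}

// TargetPlugin decides a) which IPs back the record and b) how the FQDN is formed.
type TargetPlugin interface {
	TargetIPs(r ServiceRecord) []string
	Fqdn(r ServiceRecord, rootDomain string) string
}

// hostIPPlugin keeps the current default behaviour: host IPs as targets,
// servicename.stackname.environmentname as the FQDN.
type hostIPPlugin struct{}

func (p hostIPPlugin) TargetIPs(r ServiceRecord) []string { return r.HostIPs }
func (p hostIPPlugin) Fqdn(r ServiceRecord, rootDomain string) string {
	return r.ServiceName + "." + r.StackName + "." + r.EnvironmentName + "." + rootDomain
}

// containerIPPlugin covers the VPN use case: internal container IPs as targets.
type containerIPPlugin struct{}

func (p containerIPPlugin) TargetIPs(r ServiceRecord) []string { return r.ContainerIPs }
func (p containerIPPlugin) Fqdn(r ServiceRecord, rootDomain string) string {
	return r.ServiceName + "." + r.StackName + "." + r.EnvironmentName + "." + rootDomain
}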

There have already been a lot of user code contributions to the Rancher external-dns branch (mostly enabling support for more providers), so we always appreciate new PRs for the project, especially ones covering use cases that are common to many users, which I believe this one is.

That makes sense. I did this as a super quick hack to get my scenario working, but if I find more time I’ll see how I can turn it into a more reusable chunk of code.

Another note on my scenario here: I was looking for container-level resolution in the names. The goal was to be able to directly connect to services for setup and administration. For example, I have a single Postgres container and I want to connect to Postgres directly and securely (e.g. over the VPN) to deploy our newest DB migrations. Hypothetically, if I had a Docker image that elected a master + follower PG container, the existing code would resolve the service to two IPs even though only the master would be writable.
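
To make the ambiguity concrete, here’s a small sketch (the hostnames are hypothetical) of what a client sees when only a service-level record exists:

package main

import (
	"fmt"
	"net"
)

func main() {
	// With only a service-level record, the name resolves to every container
	// backing the service, so a client can't tell master from follower.
	ips, err := net.LookupHost("postgres.mystack.example.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("postgres service resolves to:", ips) // e.g. two IPs, master + follower

	// With per-container records I could target the master directly, e.g.:
	// psql -h postgres_1.postgres.mystack.example.com ...
}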

This said, there may be completely different solutions that make sense… e.g. perhaps the internal Rancher DNS would be another place to put what I’m trying to accomplish, or perhaps something different altogether that would replace my current VPN approach.