More complex dynamic DNS on EC2 RKE cluster

I have an RKE cluster deployed to EC2. We host several apps, some of which are pointed to our cluster by our own Route 53 hosted zones and some of which are pointed to our cluster by other DNS providers our partners use. Some apps have both Route 53 and partner-managed records.

I’m trying to figure out the best way to set up sensible DNS that will keep up to date as the cluster scales.

I’ve looked at external-dns, but it seems to autodetect hostnames from the Ingress resources and configure DNS automatically. I’m not sure that’d work for us – not all the domains we have Ingresses for live in our own Route 53 hosted zones. It does support restricting itself to specific domains (the `--domain-filter` flag), but we have many projects, and having to ping someone to adjust that list every time a cluster service changes seems painful.

For now we’ve just set the IP addresses of the nodes directly into the DNS records. But this doesn’t scale: as the cluster adds new nodes or removes existing ones, clients may intermittently resolve a now-invalid IP and requests will start failing.

Ideally there’d be something that takes all of the worker node IPs and publishes them under a single subdomain (one that doesn’t actually need to host a service). Then we could just CNAME all of our other records to it and tell our partners to do the same. Or maybe something else entirely is a better solution.
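To make the idea concrete, here’s a rough sketch of what a small sync job for that “node subdomain” could look like, using boto3. The zone ID, record name, and cluster tag key are all placeholders/assumptions – this isn’t an existing tool, just the shape of the automation:

```python
def build_change_batch(record_name, ips, ttl=60):
    """Build a Route 53 UPSERT whose A record round-robins across all node IPs."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip} for ip in ips],
            },
        }]
    }

def worker_ips(ec2, tag_key="kubernetes.io/cluster/my-cluster"):
    """Collect public IPs of running instances carrying the (assumed) cluster tag."""
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag-key", "Values": [tag_key]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    return [i["PublicIpAddress"]
            for r in resp["Reservations"] for i in r["Instances"]
            if "PublicIpAddress" in i]

def sync_node_record(zone_id, record_name):
    """Push the current worker IPs to Route 53 (needs AWS credentials)."""
    import boto3  # deferred so the pure helpers above work without AWS access
    ec2 = boto3.client("ec2")
    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId=zone_id,  # hypothetical hosted zone for nodes.example.com
        ChangeBatch=build_change_batch(record_name, worker_ips(ec2)),
    )
```

Run on a timer (cron, Lambda, or an in-cluster CronJob), something like `sync_node_record("Z123EXAMPLE", "nodes.example.com.")` would keep that one record current, and everything else – ours and our partners’ – just CNAMEs to it.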

Any idea how to do this?

FWIW it looks like this might be blocked by an open bug:

Any given Ingress in Rancher seems to report only a single IP from the cluster in its status, so something like external-dns can’t be configured to pick up the whole cluster’s IP set for any domain. This just might not be possible in Rancher :frowning:

It seems the default approach is to put a cloud LB in front of the nginx ingress, but I’m not sure why that approach is so much more popular than DNS round robin – I’d rather not put an extra LB (and its associated monthly costs) in the traffic path.