Migrate EC2 instances from Public IPs to Private IPs

Hi,
we have a running environment where all agents were registered through the public IP of the EC2 instances. Due to the high traffic volume and the resulting data-transfer costs we need to migrate the hosts to use the private IPs.

We tried adding new hosts with the private IP in CATTLE_AGENT_IP, but the healthcheck never went green, so no load balancer could be added to accept public traffic.
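For reference, we registered the new hosts roughly like this (the server URL, token, agent version and IP below are placeholders; the real command comes from the Add Host screen in the Rancher UI):

```
# Registration of a new host, with the registered IP pinned via
# CATTLE_AGENT_IP. All values below are placeholders for our setup.
sudo docker run --rm --privileged \
  -e CATTLE_AGENT_IP="10.0.12.34" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.11 \
  https://rancher.example.com/v1/scripts/<registration-token>
```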

What is the best way to migrate this environment to the hosts' private IPs? Some hosts must remain, so we cannot simply replace all nodes.

Thanks,
Oliver

All hosts in an environment must be able to reach each other using the registered IP (from auto-detection or CATTLE_AGENT_IP; whatever’s shown on the host in the UI). It doesn’t matter whether they’re public or private, just that they’re mutually reachable.
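A quick way to sanity-check that (a sketch; 10.0.12.34 stands in for another host’s registered IP): the managed network runs IPsec over UDP 500 and 4500, so those ports need to be open between all registered IPs in both directions.

```
# Run on each host against every other host's registered IP.
OTHER=10.0.12.34   # another host's registered IP (placeholder)
ping -c 3 "$OTHER"
# IPsec overlay ports; note that nc's UDP probe is only indicative --
# "open" just means no ICMP port-unreachable came back.
nc -u -z -v -w 3 "$OTHER" 500
nc -u -z -v -w 3 "$OTHER" 4500
```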

The healthcheck service not going green on the new host indicates that the 3 other hosts chosen to check the healthcheck container on the new host can’t reach it. If you can’t make them all mutually reachable, then you need a new environment and an eventual failover from old to new.

The new and the old hosts are in the same security group, so they can reach each other via both public and private IP. What I noticed is that when I added two new hosts with CATTLE_AGENT_IP set to the private IP, the containers on these two new hosts can ping each other, while the containers on the hosts registered via public IP cannot be pinged.
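For what it’s worth, this is roughly how I tested it (the container names and the 10.42.x.x address are example values from our environment):

```
# On the host registered via public IP: read a container's
# managed-network (10.42.x.x) address from its eth0 interface.
docker exec <container-on-public-host> ip -4 addr show eth0

# From a container on a host registered via private IP, ping it.
docker exec <container-on-private-host> ping -c 3 10.42.183.24
```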

That should be possible, right?

Could this be the issue described in the section "VMS WITH PRIVATE AND PUBLIC IP ADDRESSES" at
http://rancher.com/docs/rancher/v1.6/en/hosts/custom/
specifically the note:

“When setting the private IP address, any existing containers in Rancher will not be part of the same managed network.”

All that matters is that the IPs the hosts in the environment are registered with are all mutually reachable. So private1 <-> private2 working but public1 <-> private1 not working suggests this is not actually the case in your setup.
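One way to verify (a sketch): watch the overlay traffic on one of the hosts and see which peers actually show up.

```
# On private1: watch which peers exchange IPsec (overlay) traffic
# with this host. If public1's registered IP never shows up while
# its containers try to reach yours, that path is broken.
sudo tcpdump -ni any "udp port 500 or udp port 4500"
```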

Actually, public <-> private is only the case during the migration. In general that would not be an issue, as we can spin up more instances and shift the containers over. But when it comes to the load balancer, which depends on the healthcheck, the migration will involve an interruption. So we cannot spin up the new load balancer on a private IP and start shifting traffic away from the old one, which is registered via the public IP.

And as you wrote, the healthcheck depends on three other hosts, so the downtime is not really deterministic, since we have no control over which hosts are involved in the healthcheck.

Thanks,
Oliver