AWS host IP incorrect in Rancher UI

Running the latest version of the agent and UI, the hostname is correct but the IP is incorrect. Here is a screenshot: the IP should show up as 10.0.182.214 but instead shows up as 10.0.75.231. I have seen this on all hosts recently.

Thanks

  • Trevor

Can you confirm whether this is using docker-machine, or did you create a host in AWS and then SSH in and run the rancher/agent command from the "Add Custom" page?

This was an existing AWS machine running the custom command from the "Add Custom" page. In previous versions of Rancher I did not see this issue; it only started recently.

Thanks

  • Trevor

I have a similar issue. I am setting up a host using an Auto Scaling Group from a Launch Configuration in AWS. I run the rancher/agent command using user data at startup. The Rancher agent starts successfully, but the host is registered in Rancher using the internal AWS IP rather than the public IP.

@hannes_brt and @tbossert sound like they have opposite issues.

In @tbossert's issue, the name of the host in the UI is showing the private IP and the IP that's registered is the public one. @tbossert - You can add -e CATTLE_AGENT_IP=<private_ip> to set the private IP of the host. Rancher attempts to pick the correct IP, and we typically default to the public one. I'm not sure what would have changed to cause it to pick a different IP.
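For example, re-running the registration command with the IP set explicitly looks roughly like this (a sketch only; the server URL and registration token below are placeholders for your own setup):

```
# re-register the host with an explicit agent IP (placeholder URL and token)
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP=<private_ip> \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.0.1 http://<rancher-server>/v1/scripts/<registration-token>
```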

In @hannes_brt's issue, the registered IP is the internal one, but he wants the public one. I honestly haven't tried using the Auto Scaling Group in AWS. Are you sure it's the internal AWS IP and not the docker0 IP?

Thanks Denise, I believe you are correct. I was using autoscaling groups, and in a recent config I did not explicitly tell AWS not to assign a public IP. I have since created a new config and hopefully this will resolve it without needing to pass the private IP env variable.

  • Trevor

Hi @denise. I have the same issue as @hannes_brt.
I am trying to configure an AWS Auto Scaling group and Route53 for auto-created hosts, but Rancher uses the internal host IP.

I need to manually set the io.rancher.host.external_dns_ip label to make it work. The problem is that when hosts are created by AWS Auto Scaling I cannot set this label.

To get the EC2 public IP we can use instance metadata (AWS docs):
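For example, something like this from inside the instance (just a sketch using the standard instance metadata endpoint, not exactly what I run today):

```
# ask the EC2 instance metadata service for this instance's public IPv4 address
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
echo "$PUBLIC_IP"
```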

Do you have any suggestion on how to make this work with Rancher?

I'm not too familiar with autoscaling groups in AWS, but how do you launch the rancher/agent on the host after it's created?

Hi @denise, at the moment I am not using Auto Scaling. I create a host manually using the Rancher AWS interface.

Hi all.

I resolved my problem with a cloud-config script:

```
#cloud-config
repo_update: true
repo_upgrade: all

packages:
  - htop

write_files:
  - path: /var/tmp/cirrus-init.sh
    permissions: '0644'
    owner: root:root
    content: |
      PUBLIC_DNS=$(curl -s checkip.amazonaws.com)
      HOST_LABELS='io.rancher.host.external_dns_ip='$PUBLIC_DNS
      docker stop $(docker ps -q -f name=rancher-agent)
      docker run -d --privileged -e CATTLE_HOST_LABELS=$HOST_LABELS \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v /var/lib/rancher:/var/lib/rancher \
        rancher/agent:v1.0.1 http://RANCHERHOST/v1/scripts/XXX:XXX:XXX

runcmd:
  # run the init script written above
  - sh /var/tmp/cirrus-init.sh
```

I hope it helps someone.


I am also using Auto Scaling groups. I think this example of the cloud-config script will be helpful.

I can see how using an AWS auto-scaling group would be useful in creating a set of 'base' hosts auto-registered to the Rancher console, but presumably that's not entirely useful once you have started deploying containers. What I mean is, if you set a scheduling policy for your ASG so that you are not clocking up costs when the hosts are not required, those instances will be terminated and thus everything on them is gone?

Are you using ASGs without a scheduling policy, or am I missing something here?

Since I do not store persistent data on any of the instances, if an instance fails then Rancher simply moves the containers over to another instance. I have been testing this quite a bit. I avoid using sidekick containers if I can, unless they are redundant. I have a GlusterFS file server (not deployed from the catalog but pre-existing), and a volume from that is pre-mounted on all my instances by the Launch Configuration from my autoscaling group, so no matter what instance my containers land on, the data is there.
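As a rough sketch (the server and volume names here are placeholders, not my exact setup), the user data in the Launch Configuration mounts the volume with something like:

```
#!/bin/bash
# mount the pre-existing GlusterFS volume so containers on any instance see the same data
# (assumes the glusterfs client is already installed in the AMI)
mkdir -p /mnt/shared
mount -t glusterfs gluster1.example.com:/shared-data /mnt/shared
```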

I hope that answers your questions.


Hi,

In case you are using CloudFormation, any chance to share it with us? I'm starting to write something similar to what you just described.

Sorry, I am not using CloudFormation. But that would be a great idea. I made a few changes last week. Right now my setup is like this:

Hi all,

@Fraser_Goffin and @cloudlady911.

Some of my containers need to share files. To avoid losing data I use Convoy-Gluster + GlusterFS storage. When a host is no longer needed in my cluster, Rancher rearranges the containers, and local data is not lost because it is in NFS.

PS.: I maintain a minimum of 2 hosts in my ASG to provide HA for GlusterFS.
PS.2: I use an AWS Lambda function and DynamoDB as an extra controller. When the ASG shuts down an EC2 instance, I receive a message and de-register the host in Rancher (a sketch of that step is below).
PS.3: I use the Route53 service from the Rancher Library to get DNS load balancing across my ASG hosts.
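The de-registration step is roughly like this (a sketch only, not my exact Lambda code; the host ID, keys, and URL are placeholders, and it assumes the Rancher v1 API):

```
#!/bin/bash
# de-register a terminated host from Rancher (placeholder URL, keys, and host id)
RANCHER_URL="http://RANCHERHOST/v1"
HOST_ID="<host-id-from-termination-event>"
CREDS="<access_key>:<secret_key>"

# deactivate the host first, then remove it
curl -s -u "$CREDS" -X POST "$RANCHER_URL/hosts/$HOST_ID/?action=deactivate"
curl -s -u "$CREDS" -X POST "$RANCHER_URL/hosts/$HOST_ID/?action=remove"
```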

+1 for maintaining state off the host where applications run and which may be terminated via your ASG policy at any time. We lean towards NFS rather than GlusterFS but only because we don't have any direct experience of Gluster (and, when Amazon finally get around to releasing EFS we will probably use that).

Interested in hearing more about your notification approach in PS-2 to de-register hosts in Rancher; it sounds like a nice approach.

Route53, yeah, we wanted to take advantage of that integration with our corporate DNS, but some of our EA folk prefer us to be cloud agnostic (which might end up ruling out Lambda as well, but I live in hope that common sense will prevail).

Regards

Fraser.

I'm having an issue similar to the original poster's. Our hosts reside in a private AWS VPC subnet, and the IP being used is that of our VPC's NAT Gateway. I see the advice about using CATTLE_AGENT_IP. Is there a way to set that using the built-in host launching ability of Rancher, or do we need to just launch our hosts manually and then set this environment variable when starting the agent?

@Matt_Welch You could launch the hosts in Rancher, and then if they have the wrong IP, you just re-run the agent command with the addition of the CATTLE_AGENT_IP.

I realize I'm dredging up quite an old post here, but I wanted to check in to see if anyone ever elegantly solved this issue. The "rancher hosts create" command (via docker-machine) would appear to have the tools necessary to understand what is needed, with the "--amazonec2-private-address-only" and "--amazonec2-use-private-address" parameters, but it's still a no-go. Having to manually SSH to a host to set the CATTLE_AGENT_IP variable and restart the agent is extremely non-intuitive and quite a pain in the tush.