Rancher stuck at "Launching instance..." on AWS but EC2 is all green

This is an absolutely fantastic product, thank you for putting effort into building this and making it freely available.

We have Rancher v0.37.0 running on an EC2 instance. We’re attempting to spin up hosts in the same Availability Zone and subnet as Rancher, with a security group that is completely open. The instance launches and appears ready to go in EC2, but Rancher continues to say “Launching instance…”. Has anyone else experienced this issue?

On my way home I thought of running some traceroute and netcat commands (which I’ll test tomorrow) to see if they provide any insight, but I figured I’d post here to see if anyone has run into this before.
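Something along these lines is what I have in mind, checking each leg of the connection with placeholder private IPs (10.0.1.23 for the new host, 10.0.0.10 for the Rancher server) and the ports I’d expect to matter:

    # From the Rancher server toward the new host
    traceroute 10.0.1.23          # does the route to the host's private IP look sane?
    nc -zv 10.0.1.23 22           # can the server open an SSH connection to the host?

    # From the host back toward the Rancher server
    traceroute 10.0.0.10
    nc -zv 10.0.0.10 8080         # 8080 being the port our Rancher server listens on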

Something to check: are your security groups set up properly in AWS? If you let Rancher create a security group for you, it should be fine, but if you use your own, it needs the necessary ports open for provisioning and registration to work.
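If you do manage the group yourself, this is roughly the shape of it with the AWS CLI. The group IDs and CIDR below are placeholders, and 8080 is only the default port the Rancher server usually listens on, so adjust for your environment:

    # Hosts' security group (placeholder ID sg-0aaa1111bbbb2222c):
    aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111bbbb2222c \
      --protocol tcp --port 22 --cidr 10.0.0.0/16      # SSH, used by docker-machine to provision
    aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111bbbb2222c \
      --protocol tcp --port 2376 --cidr 10.0.0.0/16    # Docker daemon TLS port

    # Rancher server's security group (placeholder ID sg-0bbb3333cccc4444d):
    aws ec2 authorize-security-group-ingress --group-id sg-0bbb3333cccc4444d \
      --protocol tcp --port 8080 --cidr 10.0.0.0/16    # host registration back to the server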

Thanks for the response. Once we assigned a public IP to the host being created, it followed through correctly. Now we’re looking at our subnet structure, because there’s clearly an issue there.
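For anyone following along, these are the two subnet-level things we plan to check first, with a placeholder subnet ID (this is just general AWS networking, nothing Rancher-specific):

    # Does the subnet hand out public IPs automatically?
    aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 \
      --query 'Subnets[].MapPublicIpOnLaunch'

    # Which route table does the subnet use, and where does 0.0.0.0/0 go?
    aws ec2 describe-route-tables \
      --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
      --query 'RouteTables[].Routes'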

Why would you need a public IP? There are a lot of reasons to keep things on the private side. Can someone explain this to me? Thanks!

  • The docker-machine binary (in go-machine-service) needs to be able to talk to the EC2 API to create the instance.
  • Once it’s created, it needs to be able to open an SSH connection to install and configure Docker.
  • It then starts the rancher/agent on the host, which opens a connection to the rancher/server container using the IP/hostname configured in “Host Registration”.

So the host doesn’t necessarily need a public IP address, as long as those paths are open.
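If you want to avoid a public address entirely, docker-machine’s amazonec2 driver does support provisioning with only a private IP. I’m not certain how much of this your Rancher version exposes in the Add Host UI, but the underlying call looks roughly like this (the region, VPC, and subnet IDs are placeholders):

    # Sketch: provision a host with no public IP. This only works if whatever runs
    # docker-machine (go-machine-service) can reach the instance's private address.
    docker-machine create --driver amazonec2 \
      --amazonec2-region us-east-1 \
      --amazonec2-vpc-id vpc-0123456789abcdef0 \
      --amazonec2-subnet-id subnet-0123456789abcdef0 \
      --amazonec2-private-address-only \
      rancher-host-1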

For us, there’s something in our networking that isn’t configured correctly, which keeps Rancher from communicating with the node. Making the node itself public seems to resolve it temporarily, but we definitely won’t consider it production-ready until we’re able to make the nodes private.

I’d be interested to hear how you get on with this @bkuhl — we’re using an EC2 instance for Rancher at the moment, and have it on a public IP, but with very strict security group rules assigned, so that it’s effectively locked down to requests from specific IP addresses. That’ll become a little messier when it comes to creating hosts on different cloud providers.

If you have multiple providers then the hosts will really need to have public IPs to create point-to-point IPSec tunnels between the network agents on each host.

There is currently no support for multiple IPs, or for determining which [source, destination] IP pair is the “best” combination to connect two hosts. So if the IPs shown on the hosts in the UI are not all mutually reachable, you will end up with islands of hosts that can only communicate with some of the other hosts.
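In practice that means UDP 500 and 4500 (the IPSec ports) have to be open between every pair of host IPs shown in the UI. If all of your AWS hosts share one security group, a self-referencing rule covers it; a rough sketch with a placeholder group ID:

    # Allow IPSec traffic between all hosts that share this security group
    aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111bbbb2222c \
      --ip-permissions 'IpProtocol=udp,FromPort=500,ToPort=500,UserIdGroupPairs=[{GroupId=sg-0aaa1111bbbb2222c}]'
    aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111bbbb2222c \
      --ip-permissions 'IpProtocol=udp,FromPort=4500,ToPort=4500,UserIdGroupPairs=[{GroupId=sg-0aaa1111bbbb2222c}]'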