Rancher is deployed on AWS, but the cloud provider is "rancher". Our deployment is meant to be cloud-agnostic; in this case I'm deploying Rancher/Kubernetes into an AWS VPC for dev purposes.
I've run into a couple of problems while adding a host as "Custom".
- Problem 1: Rancher can't figure out the hostname — it reports localhost. What's the right way to configure an EC2 host in this scenario?
I used hostnamectl set-hostname blah,
then added this to /etc/hosts:
127.0.0.1 localhost blah
That still didn't work. Update: I figured it out by luck — adding another /etc/hosts entry, "public_ip blah", fixed it.
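For reference, here's a sketch of the configuration that ended up working for me. The hostname `blah` and the IP `203.0.113.10` are placeholders — substitute your actual hostname and the instance's public (or local) IP:

```shell
# Set the hostname (persists across reboots on systemd-based distros)
sudo hostnamectl set-hostname blah

# /etc/hosts needs the hostname mapped to a routable IP, not only to
# 127.0.0.1 -- otherwise the host resolves its own name as localhost.
# 203.0.113.10 is a placeholder for the instance's IP.
sudo tee -a /etc/hosts <<'EOF'
127.0.0.1    localhost
203.0.113.10 blah
EOF
```

After this, `hostname -f` should return `blah` and resolve to the routable IP rather than the loopback address.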
Problem 2: after the EC2 host is added, the kubelet-unscheduler container restarts constantly. The error log says it can't determine the host ID from the provider.
So I added the host again through the UI. Step 4 in the UI says
"specify the public IP", but instead I used the local IP of the EC2 instance. That seemed to do the trick.
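If I understand the custom-host flow correctly, you can also pin the IP the agent reports by setting CATTLE_AGENT_IP in the registration command, rather than typing it into the UI. A sketch, assuming a Rancher 1.x custom-host setup — the agent tag, server URL, and token are placeholders, so copy the real command from your Add Host screen and just add the `-e` flag:

```shell
# Fetch the instance's private IP from the EC2 instance metadata service
LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Placeholder registration command -- use the one your Rancher UI generates,
# adding CATTLE_AGENT_IP so the agent registers with this IP.
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP="$LOCAL_IP" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.11 http://<rancher-server>:8080/v1/scripts/<token>
```

Use `public-ipv4` instead of `local-ipv4` in the metadata path if you want the public address.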
I'm just confused about this behavior overall. Am I doing something considered wrong or bad practice? Perhaps it isn't common to deploy Rancher on AWS with cloud-provider=rancher, and I'm running into side effects of that?
Thanks