Using public IP instead of private IP for the agent in a multi-cloud setup

I am trying to do something that may not be possible under 2.0: create a test multi-cloud cluster.

I am running into an issue where the cluster nodes use their private IPs to communicate among themselves, but obviously this won’t work when one node is on Azure and another is on GCP. I tried the “-e CATTLE_AGENT_IP=” method to force the agent to use the public IP, but it does not appear to work.
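For reference, this is roughly the registration command I ran, with the CATTLE_AGENT_IP override appended; the image tag, server URL, token, and checksum are placeholders:

```
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  -e CATTLE_AGENT_IP=<node public IP> \
  rancher/rancher-agent:<tag from the cluster UI> \
  --server https://<rancher server> --token <token> --ca-checksum <checksum> \
  --etcd --controlplane --worker
```

I have also seen --address and --internal-address flags mentioned for the 2.0 agent, but I could not confirm whether they are available in beta 3.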

I use a custom cluster in Rancher 2.0 beta 3 and manually add nodes using the CLI command provided when you configure the cluster. Nodes show up but fail to communicate. Any options?

Hi,

I have run into similar problems when trying to create a cluster on DO from a Rancher server that runs on-premises. It is able to provision machines and can even SSH into them (after some fiddling with my firewall), but there is more to it; maybe the port lists in https://rancher.com/docs/rancher/v2.0/en/quick-start-guide/ give you an idea. Once I saw those, I discontinued my attempts.

Establishing a VPN between the two sites might help, but that is currently out of scope for me. For now I’ll continue with a completely local setup, and when I have some time I’ll try to log the connections between the Rancher server and the cluster to see how they talk to each other (rough sketch below). Of course it would be helpful if Rancher server could set up a VPN between sites the same way it sets up clusters, ingress, pipelines, etc., but we probably won’t have that before 2.1 at the earliest.
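If it helps, this is the rough sketch I have in mind for that; the node IP is made up and the port list is from memory, so double-check it against the port requirements in the docs:

```
# Run on the Rancher server host. NODE_IP is one cluster node's public IP.
NODE_IP=203.0.113.10

# Quick TCP reachability check from the server towards the node
for port in 22 80 443 2376 6443 10250; do
  nc -zv -w 3 "$NODE_IP" "$port"
done

# Capture the actual server<->node traffic to see which ports really get used
# (flannel's VXLAN on 8472/udp won't show up in the nc loop above)
sudo tcpdump -ni any host "$NODE_IP" -w rancher-node.pcap
```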