ERROR ConsoleStatus - Failed to connect to master

Hello,

I am setting up Rancher HA on AWS using the instructions in the Rancher docs, but only one node is active; the others keep logging "[main] ERROR ConsoleStatus - Failed to connect to master at <IP_ADDRESS>". At first I had two nodes, then added a third to rule out a quorum issue. Each node does recognize the other two in the pool, but the message persists and only one is ever active. I manually shut down the active node and another one took over, so I'm not certain whether this is how it's supposed to work — I thought the setup was active/active, not active/passive.

I appreciate your help, thanks

This is resolved,

Thanks

Can you please share how you solved this?
I've got the same error message right now.

I had exactly this issue in a local environment.

When I add a second Rancher server pointing at the same database, it gets registered immediately in the GUI.
From within the second Rancher server container, I can ping the other Docker host at 10.10.31.11 without any problem.
In the logs, however, this message is repeated:

CATTLE_CATTLE_VERSION=v0.181.13

CATTLE_RANCHER_CLI_VERSION=v0.6.2
CATTLE_RANCHER_COMPOSE_VERSION=v0.12.5

[main] INFO ConsoleStatus - DB migration done
[main] INFO ConsoleStatus - Cluster membership changed [10.10.31.11:9345, 10.10.32.11:9345]
[main] INFO ConsoleStatus - Checking cluster state on start-up
[main] ERROR ConsoleStatus - Failed to connect to master at 10.10.31.11
[main] ERROR ConsoleStatus - Failed to connect to master at 10.10.31.11
[main] ERROR ConsoleStatus - Failed to connect to master at 10.10.31.11

SOLUTION: unblock traffic on port 9345 via iptables.
Quick check: telnet from the other host to the master Docker host on port 9345.
If the connection is refused -> this is your issue.
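The check and fix above can be sketched as follows (the IP 10.10.31.11 is taken from the logs in this thread; the iptables rule is one possible way to open the port and assumes iptables is the active firewall on the master host — adjust for firewalld, ufw, or AWS security groups as appropriate):

```shell
# From the secondary host: test whether the master's cluster port (9345)
# is reachable. nc exits non-zero if the connection is refused/filtered.
nc -zv 10.10.31.11 9345

# If the connection is refused, allow inbound TCP 9345 on the master host.
# (Assumption: iptables is the firewall in use; this inserts an ACCEPT
# rule at the top of the INPUT chain and is not persistent across reboots.)
sudo iptables -I INPUT -p tcp --dport 9345 -j ACCEPT
```

On AWS you would typically achieve the same by adding an inbound rule for TCP 9345 to the security group shared by the Rancher server nodes.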