I’m running 1.6.14 with one Master and one Worker on CentOS Linux release 7.4.1708 and Docker version 17.12.1-ce, build 7390fc6.
Sometimes the vNIC is lost; when that happens, the docker0 bridge port enters the blocking and disabled states:
Mar 8 17:47:05 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Mar 8 17:47:05 kernel: docker0: port 9(vethr686e0fc54f) entered blocking state
Mar 8 17:47:05 kernel: docker0: port 9(vethr686e0fc54f) entered disabled state
Mar 8 17:47:05 kernel: device vethr686e0fc54f entered promiscuous mode
Mar 8 17:47:05 kernel: docker0: port 9(vethr686e0fc54f) entered blocking state
Mar 8 17:47:05 kernel: docker0: port 9(vethr686e0fc54f) entered forwarding state
Mar 8 17:47:05 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 8 17:47:05 NetworkManager[794]: <info> [1520527625.9558] device (vethr686e0fc54f): link connected
Mar 8 17:47:05 NetworkManager[794]: <info> [1520527625.9670] manager: (vethr686e0fc54f): new Veth device (/org/freedesktop/NetworkManager
When this happens, Rancher is unable to connect to the worker node, and everything is lost.
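For what it’s worth, a quick way to see how often the bridge is flapping is to count the “entered disabled state” transitions in the syslog. A minimal sketch — `count_flaps` is a name I made up, not a Rancher or Docker tool, and `/var/log/messages` is just the CentOS 7 default syslog path:

```shell
# Count "entered disabled state" events for docker0 ports on stdin.
# count_flaps is a hypothetical helper, not part of Rancher/Docker.
count_flaps() {
  grep -c 'docker0: port .* entered disabled state'
}

# On a live CentOS 7 host you would feed it the syslog, e.g.:
#   count_flaps < /var/log/messages
```

A steadily growing count across reboots or config changes would suggest the veth/bridge churn is recurring rather than a one-off.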
Surprisingly, this is the only report I’ve found of a problem like mine. Sergio, it seems we both deviated from the requirements or the installation procedure at the same point.
I hope an expert will take a look at our scenario.
I know it’s a late reply, but I wanted to offer my $0.02. We’ve run into this more times than I’d like to admit. After getting distracted by all manner of troubleshooting, the problem often turned out to be the host having registered itself with the wrong IP during an earlier configuration change. We fixed it by following this FAQ:
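To make that check concrete: in Rancher 1.6 the agent registers with a specific address (the `CATTLE_AGENT_IP` environment variable on the agent container), and if the host later moves to a different IP the server keeps trying the stale one. Here’s a small sketch of the comparison; `check_agent_ip` is my own helper name, and on a live host you’d pass in the two addresses yourself rather than hard-coding them:

```shell
# Compare the IP the Rancher agent registered with against the host's
# current primary address. check_agent_ip is a hypothetical helper.
# On a live host, the inputs would come from something like:
#   docker inspect -f '{{.Config.Env}}' rancher-agent   # look for CATTLE_AGENT_IP
#   ip -4 route get 8.8.8.8                             # host's outbound address
check_agent_ip() {
  registered="$1"
  current="$2"
  if [ "$registered" = "$current" ]; then
    echo "OK: agent registered with the current host IP ($current)"
  else
    echo "MISMATCH: registered $registered, host is now $current"
  fi
}
```

If the two disagree, re-registering the host with the correct `CATTLE_AGENT_IP` (per the FAQ) is usually the fix.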