We have been using Rancher in an air-gapped environment without a proxy for over 2 years. This worked reliably: creation and addition of new nodes to existing Rancher-provisioned custom clusters succeeded 100% of the time. This was on version 2.3.3.
We recently upgraded to 2.5.3/2.5.5 on RKE Kubernetes 1.19.x (we did a restore of v2.3.3 to a new RKE cluster before the upgrade). On the surface everything looks OK, but we have started seeing issues with node registrations.
After running the node command, nodes remain stuck in the ‘registering’ state in the old ‘Cluster Manager’ UI, but show as ‘Active’ in the new ‘Cluster Explorer’. In some cases a few clusters started working after restarting or recreating the nodes, but there is no consistent pattern to the issue. Essentially, we have a sync issue between the two UIs.
We have close to 20 clusters on this Rancher installation, so migrating to a new cluster is not something we are considering: we don't know how imported clusters can be effectively managed on a new installation, and it would mean losing all the app data attached to those clusters.
Is this a known issue? I would really appreciate any help we can get here.