RKE upgrade of k8s from 1.18 -> 1.19 -> 1.20

I’ve seen the docs that cover rke (without Rancher) upgrades between patch versions (1.18.6 → 1.18.16, for example).

That update was completely transparent, as expected. However, that document doesn’t really provide any details about the expectations or process for a Kubernetes upgrade between minor versions, such as 1.18 to 1.19; the related link only seems to say the update must stay within the same patch series.

Is the upgrade process basically the same, or is there some other process for moving between minor Kubernetes versions? Just update `kubernetes_version` in cluster.yml and run `rke up`? For a 1.18 to 1.19 upgrade, what is the expected outage/impact? Same question for 1.19 to 1.20. I’m also assuming it needs to be done in two passes, rather than straight from 1.18 to 1.20?
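For clarity, I mean something like this in cluster.yml, followed by a plain `rke up` (the version string below is just an example; `rke config --list-version --all` should show what a given rke binary actually supports):

```yaml
# cluster.yml -- only the relevant field shown; the version string is an example,
# not necessarily one your rke binary supports
kubernetes_version: "v1.19.16-rancher1-1"
```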

Just to update this in case anyone else searches for the same thing. I can’t promise this is a universal answer, but it was the case for us.

We applied the update by changing `kubernetes_version` first to the latest 1.19 supported by our rke version, and then to the latest 1.20. In both cases, the deployment was “almost” invisible. It did cause a lot of containers to restart spontaneously during the process – and not in a particularly clean manner – i.e. it looked to me like multiple nodes were being upgraded at once, without giving the existing ReplicaSets/Deployments time to settle back into a stable state.

However overall, it had very little impact.
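One note for anyone hitting the same restart churn: rke supports an `upgrade_strategy` block in cluster.yml that limits how many nodes are upgraded concurrently and can drain each node first. A sketch (the values here are examples, not recommendations):

```yaml
# cluster.yml -- sketch of rke's upgrade_strategy; values are examples
upgrade_strategy:
  max_unavailable_worker_nodes: 1        # upgrade one worker node at a time
  max_unavailable_controlplane_nodes: 1
  drain: true                            # cordon/drain each node before upgrading it
  node_drain_input:
    ignore_daemonsets: true
    timeout: 120
```

Lowering `max_unavailable_worker_nodes` (the default is a percentage of workers) should give deployments time to settle between nodes, at the cost of a slower upgrade.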