WebUI "forgets" configuration

Hi, I am currently searching for a Kubernetes distribution to replace our small Swarm cluster, and I stumbled upon the old RancherOS, which was exactly how I imagined I would want to manage a cluster of containerized applications. From there I jumped to the newer Rancher or rancherd for managing Kubernetes clusters, and again, it looks exactly like how I imagined optimal cluster management would look.

I'm still only scratching the surface of its capabilities, but I am already fighting something that is either an annoying bug or something I don't understand.

When I deployed a few test pods with nginx based on this, there was no published port, so logically I went to edit the deployment and added port 30080. And that's where I found the first problem.

Deployment.apps "nginx-deployment" is invalid: spec.selector:
Invalid value:  
v1.LabelSelector{MatchLabels:map[string]string{"workload.user.cattle.io/workloadselector":"apps.deployment-testapps-nginx-deployment"}, 
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

I googled a bit, and apparently this is not a bug but intended behavior: the spec.selector of a Deployment is immutable once the Deployment has been created. I skipped that by cloning the deployment and adding the published port before creating it, which worked like a charm.
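
For anyone else hitting this, the equivalent from the command line would look roughly like the sketch below: since the selector can't change after creation, the NodePort service is created up front against the existing workload label. The service name and container port 80 are my assumptions; the selector label value and nodePort 30080 are the ones from the error and the UI above.

```
# Sketch: create the NodePort service up front so nothing immutable has to change later.
# "nginx-nodeport" and containerPort 80 are assumptions; the selector label value and
# nodePort 30080 come from the error message / UI above.
kubectl apply -n testapps -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    workload.user.cattle.io/workloadselector: apps.deployment-testapps-nginx-deployment
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
EOF
```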

Then came the second problem, which I can't wrap my head around. Right after creating the “new” deployment, I see the endpoint / published port in the list of deployments. But when I go to edit the deployment, for example to change the replica count, the published port is missing; not always, but most of the time.

So imagine a moment of stress when you want to increase the number of replicas and you don't check that the published port is still there: BAM, you are offline. Even if the app still works, you have to go back to your backup YAML files, because the port is not there even in the YAML edit. This is unacceptable in a production system.
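
For now I snapshot the deployment and its services with kubectl before touching anything in the UI, so the port definition can be restored if it disappears. A sketch, assuming the testapps namespace and resource names from above:

```
# Back up the deployment and every service in the namespace before editing in the UI,
# so the NodePort definition can be restored later if the UI drops it.
kubectl -n testapps get deployment nginx-deployment -o yaml > nginx-deployment.backup.yaml
kubectl -n testapps get services -o yaml > testapps-services.backup.yaml
```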

I installed Rancher and the Kubernetes cluster by going step by step through the official "How to install on Linux" documentation. There was no error, and I really don't think I did anything wrong during the installation.

I am using Rancher 2.5.7 on Debian Buster; the installed Kubernetes is apparently v1.18.16+rke2r1.

I would imagine that Rancher is production ready, but I can't see how to work around these configs randomly disappearing, so … what am I doing wrong? Thank you.

I played with it a bit and figured out a reproducible scenario. If you edit the deployment
only as Config, or
only as YAML,
then it works fine.

But if you choose Edit Config, then scroll down and choose Edit as YAML, you get an error in the Chrome console, and even if you don't change anything and choose Cancel, the NodePort is gone.

Workaround: recreate the NodePort based on its name before saving. Also, you have to save it; Cancel won't bring it back. The NodePort seems to keep working, but it isn't shown in the list of deployments until you recreate and save it.
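
A quick way to double-check from the command line that the service still exists while the UI hides it, and to put it back if it is really gone (a sketch; the namespace and backup file are the assumed ones from the snippet above):

```
# Verify the NodePort service is still there on the cluster, independent of the UI.
kubectl -n testapps get services -o wide
# If it is really gone, re-apply the earlier backup (or the sketch above).
kubectl apply -n testapps -f testapps-services.backup.yaml
```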

I can probably file a bug report now.