Why does Rancher 2 demand so much memory?

I installed a Rancher 2 server on 4G of RAM, and it got less and less stable after a while. I guess it is because I am a poor man and provisioned Rancher with too little RAM (the documentation says 16G).

Has anyone else experienced something similar? I am curious why Rancher 2.0 is so memory hungry, and wondering if this will be a downside to running Rancher in the long run.

Thanks a lot!

I think it depends on your cluster size and the number of clusters you want to manage. I am running a small Kubernetes cluster with only a few nodes, and a 4GB RAM Rancher 2 server is working fine here.

@p7k how long have you had it up and running?

I run 4G for the Rancher server separately, with another 4 nodes as the k8s cluster. It had been working quite nicely for about a month. However, after the new year it became unusable, restarting every minute. :frowning:

My Rancher server has been running for a few months now. It consumes 1.2GB of memory most of the time. Did you check that it is really the Rancher process that is growing so much? Was it running the whole time without a reboot? I have performed some updates, so my Rancher server has been restarted a few times and has never run for many weeks without a reboot. So maybe I was just lucky.

No, I didn’t shut it down. The server container just restarts itself when I do something in the Web UI, typically when I do some “heavy” work on my cluster, e.g. a “big” deployment.

I have now torn down the entire node and done a fresh re-install, while increasing RAM to 6G. Let’s see what happens after a while.

I had two Rancher servers (both on VMs, one with a single cluster and another with three small clusters) running stable on 2G for quite some time. A few weeks after upgrading to 2.1.x (maybe this was not the reason, but I did not add any cluster objects), they ran out of memory and I had to give them 4G. This fixed the problem, and after the restart everything went green again.

Thanks @remigius! I actually suspect the problem starts with 2.1.x. Hopefully an “insider” from Rancher can say something about this.

The architecture of Rancher and Kubernetes is such that we end up having to cache much of etcd in memory in the Rancher server (this is a typical k8s controller pattern). This can ultimately eat up a lot of memory.
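For anyone unfamiliar with that pattern, here is a minimal client-go sketch (not Rancher’s actual code) of how a controller keeps a watched resource cached in memory; the kubeconfig path and resync period are just placeholder assumptions:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A shared informer factory keeps an in-memory cache of every
	// object it watches, resynced here every 10 minutes.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	// Event handlers react to changes; the full object set stays cached.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("cached pod: %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Block forever; the informer cache is now held in memory.
	select {}
}
```

Every resource type watched this way holds its full object set in memory, so the footprint grows with the number and size of the clusters being managed.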

Also, it sounds like you are running the “all in one” version of Rancher via `docker run rancher/rancher ...`. Note that when you run in this mode, we have to run Rancher, the k8s control plane, and etcd all inside that container. Those all combine to eat a hefty amount of resources.

Thank you @cjellick! Yes, I did start the server with `docker run rancher/rancher`, but I have the k8s control plane and etcd running on dedicated nodes (VMs), not sharing the same container/VM. Are you saying that Rancher duplicates all the control plane/etcd state into the Rancher server container?

Can Rancher just act as a thin proxy that routes requests from the frontend to the different k8s clusters? Or is there any plan to limit the memory footprint, at least not letting it go crazy in a long-running system?