Small k8s cluster for home has huge memory overhead

Hello rancher community :slightly_smiling_face: ,

I am quite new to the world of containers and container orchestration and I am eager to learn. I started low in the abstraction layers by building a few of my apps with Dockerfiles and docker-compose, and I am sold. Now I would like to migrate my home “production” workload into containers using a container orchestration tool.

After some research on k8s, I chose Rancher 2.0 to deploy and manage my self-hosted k8s cluster. I have a small Proxmox hypervisor with 16 GB of RAM to deploy the Rancher and k8s VMs on, and I went with the following simple cluster setup:

  • rancher-server VM - 4 GB RAM - 2 vCPU
  • k8s control plane & etcd - 4 GB RAM - 2 vCPU
  • k8s worker - 4 GB RAM - 2 vCPU

I tried the same cluster setup with both RancherOS and CentOS 7 VMs and saw the same behavior in each case:

  • memory usage reported by rancher-server: 0%
  • memory usage reported by the proxmox hypervisor: almost 100% for both k8s nodes

Is it normal for the k8s containers to consume almost 4 GB of node memory just for etcd and the control plane?
Is it likewise normal for them to consume almost 4 GB just for the worker processes?

I do not have a lot of memory left for my own pods in this kind of scenario…

Is my approach wrong? I feel like I missed something, as 4 GB of overhead on each node seems huge to me.

Thanks in advance for your help.

The memory shown in the Rancher UI is how much is reserved (requested) by containers, not how much is actually used.
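You can see that distinction from the cluster itself. A sketch, assuming a node named `worker-1` (substitute your own node name) and, for the usage part, that a metrics pipeline such as metrics-server is installed:

```shell
# Guarded so it degrades gracefully where kubectl is not configured.
if command -v kubectl >/dev/null 2>&1; then
  # "Allocated resources" = sum of pod requests, which is what the UI percentage reflects.
  kubectl describe node worker-1 | grep -A 8 "Allocated resources:"
  # Actual usage, if a metrics pipeline is available.
  kubectl top node worker-1 2>/dev/null || echo "metrics not available"
else
  echo "kubectl not available here"
fi
```

If the pods on the node declare few or no memory requests, the UI can legitimately show 0% even while the OS on the VM looks nearly full.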

RAM that is actually “free” is doing nothing but wasting power, so in general a Linux machine will always use almost all of it for something (page cache, buffers, etc.) and give it back when applications need it. I don’t know whether Proxmox takes this into account, but I would guess it doesn’t, so it is normal for the hypervisor to report nearly all memory as “used”.
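You can check this from inside one of the k8s node VMs. The “available” figure, not “free”, is the realistic headroom, since buff/cache is reclaimable:

```shell
# Compare "free" vs "available": buff/cache is page cache the kernel
# releases under memory pressure, so "available" is the real headroom.
free -m
# MemAvailable is the kernel's own estimate of memory that can be
# reclaimed for new workloads without swapping.
grep MemAvailable /proc/meminfo
```

If MemAvailable is still large, the near-100% figure on the Proxmox side is just cache, not k8s overhead.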