Hello,
my boss asked me to recommend an infrastructure for hosting kubelets and storage.
As I understand it, Kubernetes is optimized for IaaS, where the management of hosts and storage is only a click away. But introducing an IaaS just to serve k8s seems to overcomplicate things. It also costs a lot of computing performance by adding another abstraction layer and virtual machines.
We want to host Kubernetes on bare-metal servers. This introduces the problem of provisioned storage, which needs to be available on all hosts. Our preferred setup would be a SAN attached via Fibre Channel, available at block level to all servers. Unfortunately we have not found a working provisioning solution for this type of storage, so we have hit a dead end here.
My question is: does anyone have a reliable setup for hosting k8s on bare metal with dynamically provisioned storage, so that all pods can seamlessly switch kubelets?
Is our approach naive, or otherwise not recommended?
Thanks in advance for any advice.
Rancher 2.x is a good solution for deploying Kubernetes on bare metal. Regarding storage provisioning, Kubernetes has built-in functionality for this in the form of volumes; see the Kubernetes storage documentation. One of the storage types supported out of the box is Fibre Channel.
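For illustration, a Fibre Channel volume is declared as a statically provisioned PersistentVolume. A minimal sketch (the WWN, LUN, and size here are placeholders, not values from this thread):

```yaml
# Statically provisioned Fibre Channel volume.
# targetWWNs and lun must match your SAN zoning; values below are made up.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-pv-example
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  fc:
    targetWWNs: ["50060e801049cfd1"]  # WWN(s) of the FC target
    lun: 0
    fsType: ext4
    readOnly: false
```

Whichever node the pod lands on, the kubelet there attaches the LUN and mounts the filesystem, which is what lets pods move between hosts, provided every node has FC connectivity to the target.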
Here are two discussions along the lines of running on bare metal, the issues faced, and their solutions:
If you're prepared to support bare metal, then you should be putting ROS (RancherOS) in KVM instances to run your Rancher hosts. Just use regular KVM, since you're already comfortable with bare metal. Bare-metal Docker/Rancher/k8s is upgrade hell and testing hell, support and troubleshooting hell, all-around hell.
@kucerarichard - care to elaborate? I'm not here to defend anyone, but I haven't particularly felt like I was in hell. Then again, it's not like I'm deploying on 500 hosts. I'd be interested in knowing what problems others ran into, so I can think them through and/or plan for them.
The OP actually doesn't give insight into the scope of the project. That would help in giving an appropriate answer; scope/size does matter.
@etlweather ah, I didn't intend to attack your bare-metal position. Just my own scar tissue talking, I guess; I didn't even read your answer, just went nuclear right away. If you have a working bare-metal solution, then more power to you.
Our experience on bare metal with just a handful of hosts running Docker (UCP and DTR from Docker) was rather negative and destructive of team morale. Even though there is a lot of churn with k8s, I'd still expect much more support today (even without Rancher) than there was for us a couple of years ago with Docker.
This was meant to be my last forum post; I'm moving to the Rancher Slack channel (pruning news sources).
I successfully run Rancher RKE and Rancher 2.x on bare-metal servers of different types in a single small Kubernetes cluster (a mix of Supermicro servers, custom-built chassis, and desktops). We have different chassis because we have different GPU needs. RKE has been fairly reliable through multiple Kubernetes version upgrades over the past 9 months. I don't use a SAN; instead I use all the local drives in every chassis. Those drives are made available to Kubernetes as persistent volumes using Rook. Rook is very easy to set up, and volumes are dynamically created as needed; see the sketch below.
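Roughly, the consuming side looks like this. Note this is a sketch assuming a Rook-Ceph install with its older flex-volume provisioner; the provisioner string, pool, and class names vary by Rook release (newer versions use a CSI provisioner), so check the docs for your version:

```yaml
# StorageClass backed by a Rook-Ceph block pool.
# Provisioner name and parameters depend on the Rook release in use.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
---
# Any claim referencing the class gets a volume carved out dynamically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Since Ceph replicates the data across the local drives of several chassis, a pod can be rescheduled to another node and still reach its volume.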
Maybe I didn’t understand your requirements, but your SAN block devices mounted on your servers could be exposed the same way.
Otherwise, you may be able to tweak the iSCSI external-storage provisioner available here.
That would create volumes on your SAN automatically when PersistentVolumeClaims are made. I haven't used that setup myself, though; a rough sketch of the idea follows.
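Very roughly, you would point a StorageClass at the external provisioner, something along these lines. All names and addresses below are placeholders, and the parameter keys follow the targetd-based provisioner's README as I remember it, so treat this as an assumption and verify against the project's docs:

```yaml
# StorageClass for the external iSCSI (targetd) provisioner.
# Portal, IQNs, and volume group are illustrative placeholders only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-targetd-vg
provisioner: iscsi-targetd
parameters:
  targetPortal: 192.168.99.100:3260
  iqn: iqn.2003-01.example.com:targetd
  iscsiInterface: default
  volumeGroup: vg-targetd
  initiators: iqn.1994-05.com.redhat:example-initiator
  chapAuthDiscovery: "false"
  chapAuthSession: "false"
```

The provisioner watches for claims against the class and carves an iSCSI LUN out of the volume group for each one.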
Hi all, thanks for your contributions. Sorry for coming back so late; my notification settings need adjusting.
So far we are quite happy with our bare-metal test installation of 6 hosts. The storage we use now is NFS, but I am eager to replace it.
@etlweather: using Fibre Channel out of the box would be best, but as far as I can see that is not dynamic provisioning, only manually provided persistent volumes; the manual flow looks roughly like the sketch below.
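To spell out what I mean by "manual": an admin pre-creates each fc PersistentVolume by hand (like the earlier example), and each claim then has to pin itself to one by name. Names here are hypothetical:

```yaml
# Binding a claim to a pre-created FC PersistentVolume by name;
# no provisioner is involved, so every volume is admin-created up front.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  volumeName: fc-pv-example  # the manually created FC PV
  storageClassName: ""       # empty string disables dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```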
@mimizone: thanks for the hint about the iSCSI external-storage provisioner. I will investigate further, but at first sight it looks like a client-server setup with storage traffic going over the network, which is exactly what we want to avoid.
I came across a vendor-specific solution for our HPE 3PAR:
https://community.hpe.com/t5/Around-the-Storage-Block/Data-Persistence-for-Kubernetes-and-OpenShift-and-More/ba-p/6980397#.W3PPsiRfgUE
Maybe this can cover our requirements. I will report back.