GitLab deploy from chart fails on timeout

Hi.
I’m trying to deploy GitLab using their chart. The problem is that it fails on a timeout.
The chart deployment seems extremely slow. For example, the chart first runs a job that creates secrets; that should take only a few seconds, but even creating and running that job is very slow, so the installation of the app fails due to the timeout.

I also tried connecting to the cluster from the management host and deploying the app from there using helm install …
The “speed” of the deployment is the same, although I can set --timeout to a ridiculously high number to get the installation to succeed.
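
For reference, this is roughly what I run from the management host (the release name, namespace and the oversized --timeout value are just examples; Helm 2 takes the timeout in seconds):

helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm install gitlab/gitlab --name gitlab --namespace gitlab --timeout 1800 --wait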

Is it possible to check logs from the internal Tiller, or from whatever other mechanism deploys the charts?

I’m running Rancher 2.2.5 on RKE v0.2.6 (HA install on 1 node).
I have created a custom cluster on custom Linux nodes from Rancher, k8s v1.14.3 (2x etcd/control, 5x worker).

You can check Rancher pod logs and raise verbosity if needed. What are the specs of the nodes used for the HA install node and the cluster nodes?
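
For example, to look at the Rancher server logs (assuming the default cattle-system namespace from the Helm-based HA install):

kubectl -n cattle-system get pods
kubectl -n cattle-system logs <rancher-pod-name>

And if you deploy with the helm CLI from your management host, the corresponding Tiller logs should be in kube-system (assuming the default tiller-deploy deployment):

kubectl -n kube-system logs deploy/tiller-deploy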

Keep in mind that having 2 etcd nodes doesn’t add anything regarding being HA as described on https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/#count-of-etcd-nodes.

Do the Rancher pods hold logs for the deployment of chart apps?

All machines are custom Linux VMs on Proxmox hypervisor.
Rancher mgmt cluster:
etcd/control/worker (1 node)
Ubuntu 18.04.2, 4vCPU, 6GB RAM, Docker 18.09.8, RKE v0.2.6, helm 2.14.2, k8s v1.14.3

connected cluster created from rancher UI:
etcd/control (2 nodes)
Ubuntu 18.04.2, 2vCPU, 4GB RAM, Docker 18.09.8, k8s v1.14.3

worker (5 nodes)
Ubuntu 18.04.2, 4vCPU, 8GB RAM, Docker 18.09.8, k8s v1.14.3

Regarding the node count for etcd: if we have 2 nodes and one of them fails, will the cluster still be available, or am I wrong? And can a problem occur when the failed node comes back up again? Please bear with me, since I’m new to the k8s world.

Hi, @michal.behun.

About the node count for etcd, I’ll quote the documentation:

It is recommended to have an odd number of members in a cluster. Having an odd cluster size doesn’t change the number needed for majority, but you gain a higher tolerance for failure by adding the extra member.
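
In practice, etcd needs a quorum of floor(n/2) + 1 members to accept writes, so the tolerance works out roughly like this (my own summary, not from the Rancher docs):

members: 1 -> quorum 1, failures tolerated 0
members: 2 -> quorum 2, failures tolerated 0
members: 3 -> quorum 2, failures tolerated 1
members: 5 -> quorum 3, failures tolerated 2

So with 2 etcd nodes, losing either one leaves etcd without quorum and the cluster becomes unavailable until that node rejoins; you get no more fault tolerance than with a single node, just one more member that can fail.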