I created this deployment with a manifest taken from the official Kubernetes docs (only the name, the GitLab image, and replicas: 1 were changed):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-gitlab-1
spec:
  selector:
    matchLabels:
      run: pod-gitlab-1
  replicas: 1
  template:
    metadata:
      labels:
        run: pod-gitlab-1
    spec:
      containers:
      - name: pod-gitlab-1
        image: gitlab/gitlab-ee:latest
        ports:
        - containerPort: 80
I created it with kubectl:
kubectl create -f pod-gitlab-1.yml
The pod was running after 1-2 minutes, but then I noticed it was terminated right after initialisation and the cluster automatically started a new one.
Since then I haven't been able to access the Rancher web GUI at all; it only returns a 503 error.
SSH login to cluster works.
Reconfiguring the cluster via:
./rke_linux-amd64 up --config rancher-cluster.yml
doesn't progress past this line:
INFO [healthcheck] Start Healthcheck on service [kube-apiserver] on host [184.108.40.206]
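Since SSH to the nodes still works, I can at least inspect the control-plane components directly. As far as I understand, RKE runs them as plain Docker containers on the nodes (container names like kube-apiserver are the RKE defaults, so this is an assumption about my setup), e.g.:

```shell
# On a master node, over SSH:
# list the apiserver container to confirm it is (re)starting
docker ps -a --filter name=kube-apiserver

# tail its logs to see why the healthcheck stalls
docker logs --tail 100 kube-apiserver
```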
It seems to loop in some way and eventually kills the cluster?
I have an nginx load balancer in front of the cluster that forwards traffic to my 3 master nodes (the load balancer holds the SSL certs, Rancher does not).
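For reference, a simplified sketch of what my nginx setup looks like (hostnames and certificate paths are placeholders, not my real values):

```
upstream rancher_masters {
    server master1.example.com:80;
    server master2.example.com:80;
    server master3.example.com:80;
}

server {
    listen 443 ssl;
    # SSL terminates here; the masters are reached over plain HTTP
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://rancher_masters;
        proxy_set_header Host $host;
    }
}
```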
Is it possible that the new GitLab port spec (containerPort: 80) conflicts with the load balancer, which proxies port 80 to the master nodes, and so blocks its access?
How can I delete this pod/deployment without access to the web GUI?
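I assume something like the following would work from a node over SSH, provided kubectl and a kubeconfig are available there (paths and the deployment name match my manifest above, but the kubeconfig location is a guess):

```shell
# delete the deployment so the ReplicaSet stops recreating the pod;
# deleting only the pod would just trigger a replacement
kubectl --kubeconfig /etc/kubernetes/admin.conf delete deployment pod-gitlab-1
```

Is that the right approach, or does Rancher/RKE expect something else here?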
Thanks a lot