Installed Rancher via: Rancher Docs: Manual Quick Start
Set up a cluster “The-Lab” and added three nodes: docker-kube01, 02, and 03.
I know… Super original on the names etc…
I have two “workloads” configured and I can see them deployed on the nodes properly per the provisioning instructions. I can also access the web ui’s of the containers so I know everything is deployed and working.
I have a single pod that somehow got botched while I was testing various deployments of one of the workloads. According to the namespace, the pod still exists on one of the nodes, but it can’t be deleted because the pod doesn’t actually exist on that node anymore…
Most of the directions I’ve been working with suggest running something like kubectl get pods --all-namespaces to see if the pod exists; if it doesn’t, it was probably deleted incorrectly. The instructions then say to run another kubectl command to ‘force the deletion’.
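For reference, the force-deletion step those guides describe usually looks like this (the pod and namespace names below are placeholders, not anything from my setup):

```shell
# Check whether the stuck pod still shows up anywhere in the cluster:
kubectl get pods --all-namespaces

# If it sits in Terminating and won't go away, force-delete it
# (substitute your actual pod name and namespace):
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force
```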
The problem is with the first command. Based on what I understand from doing the installation via the manual install option, I opened a shell into the Rancher container on the Docker host running Rancher:
docker exec -it <rancher-container-name> /bin/bash
(kubectl doesn’t show up as an installed package on the various nodes, since k8s is running inside containers on each node…)
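For what it’s worth, you can also run kubectl through the Rancher container without opening an interactive shell. The container name here is an assumption; check `docker ps` for yours:

```shell
# Find the Rancher container's name or ID:
docker ps --filter "ancestor=rancher/rancher"

# Run kubectl directly through it ("rancher" is a guessed container name):
docker exec -it rancher kubectl get pods --all-namespaces
```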
So from the Rancher container, I can run kubectl, but for some reason, other than the administrative pods (cattle-fleet-system × 2), the two pods I created for testing don’t show up with that command. I looked for ways to run this against a specific node:
kubectl get pods --output=wide --all-namespaces --field-selector spec.nodeName=docker-kube01 (insert each server for the node name…)
The response back is: No resources found.
Clearly from the UI these pods exist, so I’m either running the command incorrectly or need to figure out how to make sure I’m running it against the right cluster…
For clusters, I have the standard one that gets created for the controller, and then The-Lab, which is the one I created for running these pods…
My suspicion is that it’s giving me results for the “local” cluster and not for “The-Lab” cluster…
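If that’s the case, checking kubectl’s contexts from inside the Rancher container should confirm it. The context name for The-Lab below is an assumption; use whatever `get-contexts` actually lists:

```shell
# See which contexts the kubeconfig knows about, and which is active:
kubectl config get-contexts
kubectl config current-context

# Switch to the downstream cluster (context name is hypothetical):
kubectl config use-context the-lab
```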
Any tips on how kubectl should work in this configuration? How do you get the commands to run against a specific cluster, etc.?
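One route I’ve seen that sidesteps the Rancher container entirely: Rancher’s UI lets you download a kubeconfig for a specific cluster (the Kubeconfig File option on the cluster’s page), which you can then point kubectl at from a workstation. The file path below is just an example:

```shell
# Point kubectl at the kubeconfig downloaded from the Rancher UI:
export KUBECONFIG=~/Downloads/the-lab.yaml

# Now commands run against The-Lab rather than the local cluster:
kubectl get pods --all-namespaces -o wide
```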
Thanks in advance.