How does kubectl work with Rancher installed on Docker?

Setup:
Installed Rancher via: Rancher Docs: Manual Quick Start

Set up cluster “The-Lab” and added three nodes: docker-kube01, 02, and 03.
I know… Super original on the names etc…

I have two “workloads” configured and I can see them deployed on the nodes properly per the provisioning instructions. I can also access the web UIs of the containers, so I know everything is deployed and working.

I have a single pod that somehow got botched while I was testing various ways of deploying one of the workloads. According to the namespace, the pod exists on one of the nodes, but it can’t be deleted because the pod doesn’t actually exist on the node anymore…

Most of the directions I’ve been working with suggest doing something like kubectl get pods --all-namespaces to see if the pod exists, and if it doesn’t, it was probably deleted incorrectly. The instructions say to run another kubectl command to ‘force the deletion’.
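(For reference, the “force the deletion” command those guides describe is usually something along these lines; the pod name and namespace here are just placeholders:)

kubectl delete pod <pod-name> --namespace <namespace> --grace-period=0 --force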

The problem is with the first command. Based on what I understand from doing the installation via the manual install option, I opened a shell into the Rancher container running in Docker on the host:
docker exec -it <rancher-container-id> /bin/bash
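(To find the container ID in the first place, something like this should list the Rancher container, assuming the image is rancher/rancher:)

docker ps --filter "ancestor=rancher/rancher"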

(kubectl doesn’t show up as an installed package on the various nodes, since k8s itself runs inside containers on each node…)

So from the Rancher container I can run kubectl, but for some reason, other than the administrative pods in

cattle-fleet-local-system
cattle-fleet-system x 2
cattle-system
and kube-system,

the two pods I created for testing don’t show with that command. I looked for ways to run this against a specific node.

kubectl get pods --output=wide --all-namespaces --field-selector spec.nodeName=docker-kube01 (insert each server for the node name…)

The response back is: No resources found.

Clearly from the UI these pods exist, but I seem to be running the command incorrectly, or I need to figure out how to make sure I’m running it against the right cluster…
For clusters, I have the standard one that gets created for the controller,
and then The-Lab, which is the one I created for operating these pods…

My suspicion is that it’s giving me results for the “local” cluster and not for “The-Lab” cluster…
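Presumably I could confirm which cluster kubectl is actually pointed at with the standard context commands, something like:

kubectl config current-context
kubectl config get-contexts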

Any tips for how kubectl should be working in this configuration? How do you have the commands run against a specific cluster, etc.?

Thanks in advance.

If you execute the command below, does the Node column show docker-kube01?

kubectl get pods --output=wide --all-namespaces

So I assume you mean to run this inside the Docker container running the Rancher server from the manual install, and not from the root shell on that server.

Running it inside that container, I don’t get anything other than “local-node” in the results.

I assume I am seeing the 5 namespaces from the default “local” cluster that is created when you do the manual install. This is why I’m questioning the right location to run kubectl from. Should I be running it from that Docker container’s shell? If so, do I need to use a different parameter to have it display the other cluster? Both clusters are controlled from this Docker container.
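From what I’ve read so far, I suspect the answer involves pointing kubectl at a kubeconfig for The-Lab rather than the one baked into the Rancher container, something along these lines (the file name is just my guess):

kubectl --kubeconfig ./the-lab.yaml get pods --all-namespaces --output=wide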

The confusing part of the Rancher family of products is the naming and how all the products interact with each other.

  1. Rancher is the UI for managing Kubernetes clusters. The installation can be standalone or part of a Kubernetes cluster.
  2. kubectl is a Kubernetes tool which needs a kubeconfig file (basically your user credentials for the Kubernetes cluster).
  3. Where to run it? As long as you have kubectl, a kubeconfig file, and access to the Kubernetes cluster, it can be run from anywhere (see the example below).
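For example, a rough workflow might look like the following, assuming you download the kubeconfig for The-Lab from the cluster’s page in the Rancher UI and save it locally (the path and file name here are just examples):

export KUBECONFIG=$HOME/.kube/the-lab.yaml
kubectl config current-context
kubectl get pods --all-namespaces --output=wide --field-selector spec.nodeName=docker-kube01

With that kubeconfig active, the same commands (including the force-delete mentioned earlier in the thread) run against The-Lab instead of the local cluster.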