I was wondering if it's possible to run both a Rancher Server and a Rancher Host on the same machine? I did some digging and Googling, but mostly no luck, as most threads are about Rancher 1.x. I did, however, find this thread asking the same thing:
Unfortunately, the documentation link in that thread seems to be dead or out of date, as it doesn't appear to have the relevant information (unless I'm missing something):
Essentially, the idea is to host a Rancher Server on a machine on-site, have it manage a cluster that includes VPS nodes (from AWS or DigitalOcean, for example), and also be able to run some of the more CPU/memory-intensive workloads (CI builds, ML/AI training, monitoring tools, log aggregation, etc.) on that same on-site machine.
Would this be possible? If so, how would I go about this? My apologies if this is a duplicate question.
Thanks for the help! I was able to set up a Rancher Server on ports 8080/8443 just fine, create a custom cluster, and add the same machine as a custom node. Everything is looking good now; I deployed a test NGINX container and it seems to work fine.
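For reference, this is roughly the single-node install command I used to get the server onto 8080/8443 (the image tag and the data volume path are just what I happened to use, so treat them as examples):

```
docker run -d --restart=unless-stopped \
  -p 8080:80 -p 8443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest
```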
However, I have a slightly unrelated question now. I would like to add DigitalOcean droplets to my custom cluster; would this be possible? The only way I currently know how to do this is, of course, to set up a second cluster with DigitalOcean as the cloud provider, but we would like to add droplets to the same cluster as our local machine.
Would it be as simple as running, on a DigitalOcean droplet I spun up myself, the same command I used to register the Rancher agent on my local machine? When I hit "Edit Cluster" I can find the same command I used to add my local node.
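For context, the registration command shown on the "Edit Cluster" screen looks roughly like this for me (token and checksum redacted, image tag approximate, and the role flags are whatever is ticked in the UI):

```
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.3.3 \
  --server https://<my-rancher-server>:8443 \
  --token <registration-token> \
  --ca-checksum <ca-checksum> \
  --worker
```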
So I have gone ahead and tried to use the node registration command found on the "Edit Cluster" screen to add a DigitalOcean machine to my custom cluster; however, I am running into issues registering it.
I can successfully run the command to start the Rancher agent on the DigitalOcean droplet, but it gets stuck registering the node with Kubernetes. Running docker logs on the kubelet container gives me the following (I condensed the log and removed the constant polling for the node):
I0120 20:47:13.811692 2850 trace.go:116] Trace[10524452]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:458 (started: 2020-01-20 20:46:44.782975813 +0000 UTC m=+180.757040931) (total time: 29.028672775s):
Trace[10524452]: [29.028672775s] [29.028672775s] END
E0120 20:47:13.811715 2850 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
I0120 20:47:13.811800 2850 trace.go:116] Trace[304172272]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:449 (started: 2020-01-20 20:46:44.792318443 +0000 UTC m=+180.766383531) (total time: 29.019469958s):
Trace[304172272]: [29.019469958s] [29.019469958s] END
E0120 20:47:13.811806 2850 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
I0120 20:47:13.811876 2850 trace.go:116] Trace[2133104024]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (started: 2020-01-20 20:46:44.791213562 +0000 UTC m=+180.765278717) (total time: 29.020653841s):
Trace[2133104024]: [29.020653841s] [29.020653841s] END
E0120 20:47:13.811885 2850 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
E0120 20:47:13.893571 2850 kubelet.go:2263] node "do-tor1-2vcpu-4gb-dc1" not found
I0120 20:47:13.952081 2850 trace.go:116] Trace[1871636622]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2020-01-20 20:46:44.936097671 +0000 UTC m=+180.910162829) (total time: 29.015922626s):
Trace[1871636622]: [29.015922626s] [29.015922626s] END
E0120 20:47:13.952107 2850 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSIDriver: an error on the server ("") has prevented the request from succeeding (get csidrivers.storage.k8s.io)
I0120 20:47:13.976867 2850 trace.go:116] Trace[1259173617]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2020-01-20 20:46:44.948063661 +0000 UTC m=+180.922128736) (total time: 29.028771679s):
Trace[1259173617]: [29.028771679s] [29.028771679s] END
E0120 20:47:13.976896 2850 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: an error on the server ("") has prevented the request from succeeding (get runtimeclasses.node.k8s.io)
I0120 20:44:27.952356 2850 kubelet_node_status.go:70] Attempting to register node do-tor1-2vcpu-4gb-dc1
E0120 20:44:27.988032 2850 kubelet.go:2263] node "do-tor1-2vcpu-4gb-dc1" not found
W0120 20:44:29.711719 2850 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
E0120 20:44:29.952886 2850 kubelet_node_status.go:92] Unable to register node "do-tor1-2vcpu-4gb-dc1" with API server: Post https://127.0.0.1:6443/api/v1/nodes: read tcp 127.0.0.1:42838->127.0.0.1:6443: read: connection reset by peer
E0120 20:44:30.405674 2850 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It seems like networking is an issue here. Is it possible to have local machines and cloud machines on the same cluster?
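From what I can tell, the kubelet on the droplet is trying to talk to https://127.0.0.1:6443, which I believe is the local nginx-proxy container that RKE runs on worker nodes to forward API traffic to the control plane node. So my guess is that the droplet simply can't reach my on-site control plane machine on the Kubernetes ports. The checks I'm planning to run from the droplet look something like this (<control-plane-ip> and <my-rancher-server> are placeholders for my on-site machine's reachable address):

```
# Can the droplet reach the on-site control plane node's API port at all?
nc -zv <control-plane-ip> 6443

# Can it still reach the Rancher server itself?
curl -k https://<my-rancher-server>:8443/ping

# What is the local nginx-proxy actually complaining about?
docker logs nginx-proxy 2>&1 | tail -n 20
```

If port 6443 (and the other node-to-node ports listed in Rancher's port requirements) isn't reachable from the droplet, I assume I'd need to open/forward those ports on my on-site network, or put the nodes on some kind of VPN/overlay first.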