koev
May 29, 2018, 9:44am
1
Hi,
I have a test cluster with two nodes: 1) etcd/control, 2) worker.
I noticed that my pods get deployed to the etcd/control node first. How can I change this behaviour? Of course I could taint the node, but I don’t really want to do that manually, because this should be handled by the node template.
How do you do it?
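For context, tainting by hand would look something like this (a rough sketch; `<etcd-control-node>` is a placeholder for the actual node name):

```sh
# Manually mark the etcd/control node so ordinary pods are not scheduled there
kubectl taint nodes <etcd-control-node> node-role.kubernetes.io/master=:NoSchedule
kubectl label nodes <etcd-control-node> node-role.kubernetes.io/master=
```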
Just bumped into the same problem.
I believe this is a known issue that we are addressing:
Referenced GitHub issue (opened 16 May 2018, closed 8 Jun 2018; labels: kind/bug, area/host, priority/0):
**Rancher versions:**
rancher/server or rancher/rancher: 2.0.0
rancher/agent or rancher/rancher-agent: 2.0.0
**Docker version: (`docker version`,`docker info` preferred)**
17.03.2 ce
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
centos7.2
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
virtual machine
**Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)**
single node rancher
**Environment Template: (Cattle/Kubernetes/Swarm/Mesos)**
rke - k8s 1.10.1
**Steps to Reproduce:**
1. Create a custom cluster with the worker and etcd+control nodes separated.
![image](https://user-images.githubusercontent.com/23303886/40101010-9c99c166-5918-11e8-9887-316e181c8b75.png)
2. Deploy a deployment such as this (a command-line equivalent is sketched after these steps):
![image](https://user-images.githubusercontent.com/23303886/40101090-e0f763c2-5918-11e8-8166-f7f51cdead03.png)
3. Scale the workload. You will find that the new instances are created on the etcd+control node.
![image](https://user-images.githubusercontent.com/23303886/40101259-796d5c06-5919-11e8-8512-1fcf9f07a6e3.png)
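A minimal command-line reproduction along these lines should show the same behaviour (illustrative names, not the exact manifest from the screenshots):

```sh
# Create a simple deployment with no nodeSelector or affinity
kubectl create deployment nginx --image=nginx

# Scale it up and watch where the new pods land
kubectl scale deployment nginx --replicas=5
kubectl get pods -o wide   # replicas also appear on the etcd+control node
```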
koev
May 31, 2018, 9:17am
4
I am working around this temporarily with the following node patches (replace <node-name> with the name of your etcd/control node):
kubectl patch node <node-name> -p '{"metadata":{"labels":{"node-role.kubernetes.io/master":""}}}'
kubectl patch node <node-name> -p '{"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}'
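For what it’s worth, you can check that the label and taint actually landed with something like this (a sketch; `<node-name>` is again a placeholder):

```sh
# Confirm the label and NoSchedule taint are present on the etcd/control node
kubectl get node <node-name> --show-labels
kubectl describe node <node-name> | grep -A3 Taints

# New pods from a scale-up should now land on the worker node only
kubectl get pods -o wide
```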