I have a host with two network interfaces: 192.168.0.2 (NAT) and 192.168.1.2 (local network only). A public IP (1.1.1.1, for example) is assigned to 192.168.0.2.
I want to make the k8s setup secure by using only the local network for it, but at the same time I want my services (which will be deployed inside k8s) to be accessible from the Internet.
I am trying to run Rancher Server in the following way: sudo docker run -d --restart=unless-stopped -p 192.168.1.2:80:80 -p 192.168.1.2:443:443 rancher/server:preview.
And I run the agent with the following command on the same host: sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/agent:v2.0.0-beta3 --server https://192.168.1.2 --token ... --ca-checksum ... --etcd --controlplane --worker.
But in this case the Ingress controller (deployed by the agent) fails to start, because port 80 is already in use by Rancher Server.
As a workaround, I start Rancher Server with the following command instead: sudo docker run -d --restart=unless-stopped -p 192.168.1.2:8080:80 -p 192.168.1.2:8443:443 rancher/server:preview. In this case the Ingress controller can start on port 80, because Rancher Server uses 8080.
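To see the conflict for yourself, this is roughly how I check what is holding port 80 before applying the workaround (just a sketch; container names and output will differ on your host):

```
# show which process is listening on port 80 on the host
sudo ss -ltnp 'sport = :80'

# and which published ports the running containers have
sudo docker ps --format '{{.Names}}: {{.Ports}}'
```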
Questions:
1. Maybe it is okay to make Rancher Server accessible from the Internet, so there is no need to use the local network interface at all?
2. Is the Ingress controller part of the worker role or the controlplane role?
3. Can the Ingress controller run on the same host where Rancher Server is deployed?
4. Can I point the Ingress controller at 192.168.0.2:80? Currently it binds to 0.0.0.0:80.
5. Is there a way to point the Rancher Agent at the local interface (192.168.1.2) for communication between etcd instances, between kubelet and kube-api, etc.?
6. Is it okay that I have no firewall in front of the cluster? Do I need to deny access to port 6443 (kube-api), the etcd server ports and others, or is it okay to leave these accessible from the Internet? As far as I understand, all of these ports require certificates to be used.
imho it is ok to expose the rancher server on the internet as long as you access it using HTTPS (encrypted - which is now enforced) and have a strong enough admin password
I suggest separating your k8s cluster from the rancher server - i.e. one machine with the rancher server, and another with a (simple) k8s cluster that has all cluster roles (etcd, controlplane, worker - as per your agent container arguments)
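roughly, the split I mean would look like this (a sketch only - the IP, image tags, token and checksum are placeholders taken from your own commands, and the server address must be whatever machine B can reach machine A on):

```
# machine A: rancher server only, bound to its private interface
sudo docker run -d --restart=unless-stopped \
  -p 192.168.1.2:80:80 -p 192.168.1.2:443:443 \
  rancher/server:preview

# machine B: single-node k8s cluster holding all roles, registered against machine A
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/agent:v2.0.0-beta3 \
  --server https://192.168.1.2 --token ... --ca-checksum ... \
  --etcd --controlplane --worker
```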
in my setup, I have an nginx server that is completely separate from my rancher setup and serves as a global reverse proxy to rancher and non-rancher services/sites.
if you want to access the rancher server via a reverse proxy, you have to add some parameters to allow websocket traffic. fortunately, this is pretty much the same as for rancher 1.x, which makes google search results applicable.
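as a sketch (the config path, upstream IP and port are assumptions based on your 8443 workaround, not something rancher generates for you) - the Upgrade/Connection headers are the websocket-related parameters I mean:

```
# write a hypothetical reverse-proxy config for rancher on the proxy box
cat <<'EOF' | sudo tee /etc/nginx/conf.d/rancher.conf
server {
    listen 80;
    # in production you would terminate TLS here instead (listen 443 ssl + certificates)
    location / {
        proxy_pass https://192.168.1.2:8443;   # rancher server (self-signed cert upstream)
        proxy_http_version 1.1;
        # these two headers let websocket traffic pass through the proxy
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx
```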
not having a firewall set up on a machine that is directly connected to the internet is imho a severe security risk. the least you can do is enable ufw (if it’s an ubuntu machine) and limit incoming traffic on the public interface. even better would be to have ufw enabled on all machines and only open the traffic that is necessary. I know this might be somewhat fiddly (if not managed by a configuration management system - e.g. ansible), but - again imho - safety is paramount. the getting started guide for rancher 2.0 has a comprehensive list of ports to open for master and worker nodes, and if things go wrong you can still temporarily disable ufw on the internal interfaces to see whether it is the source of the trouble (but re-enable it after solving the problem).
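as a rough sketch (eth0/eth1 and the exact port list are assumptions - take the real list from the rancher 2.0 getting started guide for your node roles):

```
# deny everything incoming by default, keep outgoing open
sudo ufw default deny incoming
sudo ufw default allow outgoing

# trust the internal interface completely (adjust eth1 to your 192.168.1.x interface)
sudo ufw allow in on eth1

# on the public interface, only open what must be reachable from the internet
sudo ufw allow in on eth0 to any port 22 proto tcp    # ssh (consider limiting the source)
sudo ufw allow in on eth0 to any port 80 proto tcp    # ingress http
sudo ufw allow in on eth0 to any port 443 proto tcp   # ingress https

sudo ufw enable
```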
I am trying to put all components on one machine for economic reasons. In real production (not for a pet project) I would prefer to separate Rancher Server from k8s.
I don’t think that having nginx outside of the cluster is a good idea. The Ingress controller is part of the k8s cluster provisioned by RKE, and I really do not want two things solving the same problem.
@braska Sorry, I did not read the description carefully enough - even the link I have provided mentions ip:hostPort:containerPort | ip::containerPort as allowed parameter formats…
While I would not exactly call this common practice, I agree with the reasoning that it is safe to expose the k8s API on the public internet. It is protected using strong encryption and it seems that keys are exchanged in a tamper-proof way. Still, I would not want to have such a setup myself. I have an nginx instance running separate from the rancher server, acting as a reverse proxy.