Service with type NodePort

How do I create a Service of type NodePort in the Rancher UI?

I am running Rancher 2 on a bare-metal cluster and have to expose some deployments (Redis, MongoDB) to the outside world over TCP.
I am fairly new to k8s, and after some research I assume that the only way to expose a TCP port for a deployment on bare metal is by using a Service of type NodePort.
My idea is to use an external load balancer to balance between all hosts on that specific port, so I get a single IP to connect to from my other infrastructure.

Is this a suitable way of doing what I want, or are there better ways? If this is the way to go, I have to create such a Service in my cluster, but I found no way of doing it in Rancher. The only way I got it working is by using kubectl directly.


You can use a ClusterIP Service on the deployment and expose it through an Ingress with hostname routing.
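A minimal sketch of such a hostname-routed Ingress might look like the following (the name, hostname, and backend Service are placeholders; on older clusters the API version is `extensions/v1beta1` rather than `networking.k8s.io/v1`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress          # hypothetical name
spec:
  rules:
    - host: myapp.example.com  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp    # assumes a ClusterIP Service named "myapp"
                port:
                  number: 80
```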

But! The Ingress sometimes makes you wait around 5 seconds until traffic is routed to the correct pods.

Also! NodePort on Rancher 2.0 doesn’t seem to work either! If you need to use NodePort to connect to an external load balancer right now and don’t care much about the master’s HA, then I’d recommend just using kubeadm…

Good luck

Doesn’t Ingress with hostname routing only work at layer 7 (HTTP), and not for plain TCP connections?

I was able to get NodePort working by deploying the following via kubectl:

kind: Service
apiVersion: v1
metadata:
  name: redis-outside
spec:
  type: NodePort
  selector:
    app: redis
  ports:
    - name: redis
      protocol: TCP
      port: 6379
      targetPort: 6379
      nodePort: 30000

This makes Redis available on all nodes on port 30000. I can then use my own load balancer to balance between all nodes, but I have not yet tested how well this works.
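As a sketch of that external load balancer, a plain TCP frontend in HAProxy could balance across the NodePort on every node (the node IPs below are placeholders for your actual hosts):

```
# haproxy.cfg sketch — balance TCP 6379 across the NodePort on all nodes
frontend redis_in
    bind *:6379
    mode tcp
    default_backend redis_nodes

backend redis_nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:30000 check   # hypothetical node IPs
    server node2 10.0.0.12:30000 check
    server node3 10.0.0.13:30000 check
```

With health checks enabled, a node that goes down is simply taken out of rotation, so clients keep one stable address.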

But I did not find a way to configure this type of service in the UI.

So your recommendation would be to use kubeadm instead of Rancher 2, or am I missing something?


I have found out that Rancher deploys ingress-nginx in its default configuration, and after moving the namespace into its own project I was able to configure the tcp-services ConfigMap via the Rancher UI. This way it’s possible to expose TCP ports that are load balanced across all nodes, similar to Rancher 1.6.
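For reference, ingress-nginx reads TCP mappings from that ConfigMap in the form `"<external port>": "<namespace>/<service name>:<service port>"`; a minimal sketch for the Redis case above (the ConfigMap namespace may differ in a Rancher-managed deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # may differ depending on how Rancher deployed it
data:
  # expose port 6379 on every node, forwarding to the redis Service
  "6379": "default/redis:6379"
```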

Can someone from the Rancher team confirm whether this is a supported way to do TCP load balancing, or whether it could have side effects for the cluster?

We also exposed the ports via Services, but we used this to have a VIP instead of an external load balancer.
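If "VIP" here means a floating address via VRRP, one common way to get it is keepalived on the nodes; a rough sketch, assuming eth0 and a free address in your subnet:

```
# keepalived.conf sketch — floats 10.0.0.100 between nodes (values are placeholders)
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other nodes
    interface eth0
    virtual_router_id 51
    priority 100            # lower priority on backups
    virtual_ipaddress {
        10.0.0.100
    }
}
```

Clients then connect to the VIP, which fails over to another node if the current holder goes down.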