How to implement a custom load balancer / routing in front of environments


First of all, I have just started using Docker and Rancher, and I really appreciate what I currently see and understand of it. So what I am asking may sound completely n00b; please bear with me.

I am trying to set up a strangler pattern to slowly deprecate a legacy application. I have a 3-container setup that consists of a Tomcat container hosting the legacy app, an ASP.NET Core container hosting the new parts, and an nginx container configured to route traffic to one of the two applications depending on some parameters in the query string. The whole thing is read-only, so I only have to deal with GET requests. This environment only needs to expose a single port for http/s, which is the port exposed by nginx. All this is working so far.
In Rancher terminology I think this is called a stack, but I will call it MyApp (I’ll stick to that terminology in the rest of my question). I managed to set up and run MyApp in Rancher in a single environment.
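To make the strangler routing concrete, here is a minimal sketch of how the nginx container could dispatch GET requests based on a query-string parameter. The upstream names, ports, and the `feature` parameter are all assumptions for illustration, not the poster’s actual configuration:

```nginx
# Hypothetical strangler routing: requests carrying ?feature=new
# go to the new ASP.NET Core app; everything else stays on the
# legacy Tomcat app. Hostnames/ports are placeholders.
upstream legacy_app { server tomcat:8080; }
upstream new_app    { server aspnet:5000; }

server {
    listen 80;

    location / {
        # Default target: the legacy application.
        set $target legacy_app;
        # If the marker query parameter is present, route to the new app.
        if ($arg_feature = "new") {
            set $target new_app;
        }
        proxy_pass http://$target;
    }
}
```

As more parts of the legacy app are replaced, the condition can be widened (or inverted) until the legacy upstream can be removed entirely.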

What I am currently trying to understand, is how I would model the following scenario:

I want MyApp to run in a production environment, as well as in a staging environment, a dev environment, and multiple demo environments (the demo environments are used to fine-tune changes with specific customers before deploying those changes to production).
Is it possible to deploy such environments, and how would I put a load balancer / reverse proxy in front of the deployment to enable the production / dev / demo scenarios?
I want to serve production from a specific domain, while the current demos are served from a domain that is not necessarily different from the production domain (it may just be a different path), so customers can evaluate changes before they go live. I am trying to understand how to achieve this kind of “routing” to different Rancher environments, perhaps by putting a reverse proxy / load balancer in front of them.

Thanks for your feedback


Thank you for the nice words about the product! We always appreciate users’ input and questions.

To answer your question, I first need to clarify what you mean by “environment”:

a) a Rancher environment, which is really a way to share deployments and resources with different sets of users; or

b) an environment as an application endpoint, which is really a stack in Rancher terms.

If the answer is b), you can use the Rancher Load Balancer Service. You create 3 stacks: myApp_prod, myApp_stage, myApp_dev, each having nginx/tomcat/asp_net services. You could actually create all these services within the same stack, but let’s keep them separate for clarity. Then create a fourth stack, let’s call it lb_endpoint. Inside this stack, create a service of type LoadBalancer, pick all 3 nginx services — myApp_prod/nginx, myApp_stage/nginx, myApp_dev/nginx — and define routing rules. We support both path and host name routing, so the traffic will get forwarded to one of the nginx services depending on the request host/path.
There is no need to publish the nginx port on the host; the Rancher Load Balancer service will access target services over the Rancher-managed IPsec network. So the only port you would have to publish is the load balancer’s public port. That would be the entry point for all user requests.
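A rough sketch of what the lb_endpoint stack’s compose file could look like with host-name routing. The label syntax shown here is the classic Rancher v1 load balancer format and varies between Rancher versions; the hostnames and ports are placeholders, so treat this as a sketch rather than a copy-paste configuration:

```yaml
# Hypothetical lb_endpoint stack (docker-compose.yml, Rancher v1-era syntax).
lb:
  image: rancher/load-balancer-service
  ports:
    - "443:443"          # the single public entry point
  external_links:
    - myApp_prod/nginx:prod
    - myApp_stage/nginx:stage
    - myApp_dev/nginx:dev
  labels:
    # Host-name routing rules: <request host>=<target port on nginx>.
    io.rancher.loadbalancer.target.myApp_prod/nginx: app.example.com=80
    io.rancher.loadbalancer.target.myApp_stage/nginx: stage.example.com=80
    io.rancher.loadbalancer.target.myApp_dev/nginx: dev.example.com=80
```

Path-based rules (e.g. routing `app.example.com/demo` to a demo stack) follow the same label scheme, which matches the questioner’s “same domain, different path” scenario.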

If the answer is a), the LoadBalancer service can’t be used, because it can only balance traffic between services that are part of the same environment. In this case, you would have to use some third-party balancer to forward traffic to the myApp_prod/nginx, myApp_stage/nginx, and myApp_dev/nginx endpoints. That raises the question of how to expose those endpoints to an LB deployed outside the Rancher network. For that, you can use our Route53 integration:

Every nginx service with an exposed port gets registered in AWS Route53 as a publicly available A record, so you would then set up the external LB to balance between those records.
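For scenario a), the external balancer could be as simple as an nginx instance doing host-based forwarding to the Route53-registered endpoints. The DNS names below are invented placeholders, the exact record names produced by the Route53 integration depend on your Rancher setup and domain:

```nginx
# Hypothetical external nginx in front of multiple Rancher environments.
# Each server block forwards one public hostname to the A record that
# Route53 created for the corresponding environment's nginx service.
server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://nginx-prod.rancher.example.com;  # placeholder record
    }
}

server {
    listen 80;
    server_name stage.example.com;
    location / {
        proxy_pass http://nginx-stage.rancher.example.com; # placeholder record
    }
}
```

Since the Route53 records are public A records, this balancer can live anywhere, it does not need access to the Rancher-managed network.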

Hope it helps.

Hi Alena,

Thank you very much for your detailed and concise answer, and for clarifying the terminology. I really like what you suggest for b), so I will go and try it that way. It seems to be a very straightforward solution that completely covers my use case. Thanks again for your time.

All the best.