We currently have some containers that run and listen on port 80; Mesos then picks a free port in the 31000-32000 range and maps it to port 80 on the container. This lets us run many instances of the service without port conflicts, and we can easily grab the host port from Marathon to access the service provided by the container.
I’m open to suggestions on better ways to do this, or the Rancher way, but I don’t want to have to name each of the services differently or generate any kind of unique identifier on my own.
For this we really need to support publishing random ports, the way Docker does by default. So:
```yml
nginx:
  image: nginx
  ports:
    - 80
```
Port 80 would be exposed on a random host port, and you could run rancher-compose port nginx to find out which one. This definitely has its limits; I think it's only really useful for development. What we would rather do in Rancher is allow something like:
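A sketch of what such a declaration might look like. The label key here is purely illustrative, not an existing Rancher key, and the image name is a placeholder:

```yml
myapp:
  image: myorg/myapp
  labels:
    # hypothetical label: the idea is that Rancher would configure the
    # load balancer / DNS entry from it automatically
    routing.domain: myapp.mydomain.com
```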
Then myapp.mydomain.com would just route to your app. The general idea of the above is that we abstract the host IP/port away from the user. In this paradigm it would work for development, but the same approach could also be done in a scalable way for a real production deployment.
In my case, we would start one instance of “myapp” then scale it up/down over time. And we need to be able to access the service on a specific instance of myapp from time to time. Today I use the Marathon API to find the host and the port exposed on the host. With what you are suggesting, if I just scale up myapp to add 3 more, how might I refer to the second one created? And after I’ve scaled up and back down to 1 a few times, how would I know the “name” of a specific instance of myapp to which I want to connect?
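For context, the Marathon lookup mentioned above can be sketched roughly like this. The JSON shape follows the style of Marathon's task listing (task id, host, ports), but the response here is an inlined hypothetical sample rather than a live API call, and the app/instance names are made up:

```python
import json

# Hypothetical response in the shape of Marathon's task listing for an app;
# in practice you would fetch this from your Marathon host over HTTP.
sample_response = json.loads("""
{
  "tasks": [
    {"id": "myapp.instance-1", "host": "10.0.0.11", "ports": [31004]},
    {"id": "myapp.instance-2", "host": "10.0.0.12", "ports": [31873]}
  ]
}
""")

def task_endpoints(response):
    """Map each task id to its host:port, using the first mapped port."""
    return {
        t["id"]: "{}:{}".format(t["host"], t["ports"][0])
        for t in response["tasks"]
    }

endpoints = task_endpoints(sample_response)
print(endpoints["myapp.instance-2"])  # -> 10.0.0.12:31873
```

This is the part that has no obvious equivalent behind a Rancher LB: the task id gives you a stable handle on one specific instance, which is exactly what a round-robin balancer hides.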
When you say “myapp.mydomain.com would just route to your app”, please show me an example. I want to access the myapp container from a Jenkins job running on some other VM in our internal network. I’m totally missing how this routing would work. Sorry if I’m missing something obvious, and feel free to send me to a doc to educate myself.
Also, is what you propose available today or on the roadmap?
The routing exists today. It is the Load Balancer (HAProxy) that routes traffic to one of your app instances. But there is no way in this setup, that I know of, to specify which instance you want to talk to. There is support for having a cookie set once you establish a connection with an instance, so that your subsequent requests end up at the same instance, but that’s as far as it goes (this is the Stickiness option when creating the LB).
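For reference, that Stickiness option corresponds roughly to HAProxy's cookie-based persistence. A minimal sketch of what such a backend section could look like; the backend name, server names, and addresses are placeholders, not what Rancher actually generates:

```
backend myapp
    balance roundrobin
    # Insert a SERVERID cookie on the first response; requests that carry
    # the cookie are then pinned to the same server.
    cookie SERVERID insert indirect nocache
    server myapp-1 10.42.0.11:80 check cookie myapp-1
    server myapp-2 10.42.0.12:80 check cookie myapp-2
```

Note that this only keeps an existing client on the instance it first landed on; it does not let a new client pick a particular instance by name.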
What is missing is for the links and LB configuration to happen automatically based on labels on the service.
EDIT: links are just in, based on the new selector links feature, so it is only the configuration options that are still outstanding.
To get what you want, I imagine you could have additional rules in the LB that would route traffic to a specific… Ah, right: the LB targets services only. I don’t see a way right now to target a single container. Ok, I’ve got no ideas at the moment for how to solve your use case…